Make RGB LED Display with xc-3 kit more Economical

XCore Project reviews, ideas, videos and proposals.
nisma
Active Member
Posts: 53
Joined: Sun Dec 13, 2009 5:39 pm

Post by nisma »

My question:
Have you gone through the source code of the MBI5026 driver? Is it working? Is the driver maintaining soft PWM properly?
I have doubts about it. I don't think it is working, or that it can support 8-bit and 10-bit PWM.
Yes and no. I have checked the code, but not very deeply.
The code does PWM at 10 bits; other bit depths are configurable by changing the Led.h file.
The resistor measurement I asked for should explain the suspected flaw in the PWM source.
Another thing: the PWM code is not made specifically for LEDs, it's standard PWM code.
The code for the PWM should be the following, but there are some questions I have not investigated,
so here it is as a sort of pseudocode that is not 100% accurate (X and Y may be inverted, ...).

Code:

for (MUX = 0; MUX < LED_MUX; MUX++) {
  memset(drivebuf, 0, sizeof(drivebuf));   /* clear all BCM planes */
  for (X = 0; X < LED_X; X++) {
    for (Y = 0; Y * LED_MUX < LED_Y; Y++) {
      /* fetch the pixel of this scan row and apply the correction chain */
      outval = dot_adj(X, Y * LED_MUX + MUX,
                       gamma_adj(color_adj(buffer[X][Y * LED_MUX + MUX])));
      if (~outval & 0x200) {               /* bit9 == low */
        for (i = 0; i < 9; i++, outval >>= 1)
          drivebuf[i][X] |= (outval & 1) << Y;
      } else {                             /* bit9 == high: bits 0..8 stay zero */
        outval >>= 9;
      }
      drivebuf[9][X] |= (outval & 1) << Y; /* the bit9 plane is always written */
    }
  }
  spi_addr = MUX;
  for (i = 0; i < 10; i++) {
    output_drivebuf_to_pins(i);            /* shift out BCM plane i */
    delay_led(1 << i);                     /* plane i is displayed for 2^i slots */
  }
}
It seems that if bit9 is set, the other bits remain at zero.
My impression is that this hardcoding (if bit9 is set, force the other
bits to zero) exists either because the CPU would otherwise run out of time,
or alternatively to limit the LED duty cycle, and with it the intensity, to at most 50%.
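
To make that concrete, here is the clamp as a tiny standalone function (my naming, not from the XMOS source), with the duty-cycle arithmetic in the comments:

Code:

/* A 10-bit BCM frame has 1 + 2 + 4 + ... + 512 = 1023 time slots. */
unsigned clamp_bcm(unsigned outval) {
    if (outval & 0x200)
        return 0x200;  /* e.g. 0x3FF (1023/1023 slots on) becomes 0x200:
                          512/1023 slots, just over 50% duty cycle */
    return outval;     /* values below 512 pass through unchanged */
}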


jagspaul
Experienced Member
Posts: 117
Joined: Tue Oct 18, 2011 3:28 pm

Post by jagspaul »

nisma wrote:It seems that if bit9 is set, the other bits remain at zero.
My impression is that this hardcoding (if bit9 is set, force the other
bits to zero) exists either because the CPU would otherwise run out of time,
or alternatively to limit the LED duty cycle, and with it the intensity, to at most 50%.
I understand the code, and I also found that the same method is used in the XMOS source; they use the same coding for bit9. But I don't understand the advantage. Basically we are limiting the intensity of the LED, so the overall brightness of the display will decrease, whereas everybody wants to increase the display brightness. Please tell me what will happen if we remove the bit9 logic.

Thanks,
jags
nisma
Active Member
Posts: 53
Joined: Sun Dec 13, 2009 5:39 pm

Post by nisma »

I posted the code from the XMOS source because you asked whether there is working PWM code.
As said previously, measure the resistance from pin 23 to GND; that resistor determines the intensity.
If the drive intensity is above a limit, derating must be observed, as specified in the datasheet.
Probably this intensity limiting is there to enforce that derating, and a bit more. Without knowing that
resistor value, it's only a guess. Another possibility is to limit the power requirement, because this
little piece of code cuts the power requirement drastically, reducing the intensity as a side effect.
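
For reference, drivers of this constant-current sink family set the per-channel LED current from that external resistor as roughly

I_OUT ≈ K × V_REF / R_ext

where the current gain K and the reference voltage V_REF are constants from the MBI5026 datasheet that I have not verified here. Measuring the resistor therefore tells you whether the outputs are being driven into the region where the datasheet requires derating.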
jagspaul
Experienced Member
Posts: 117
Joined: Tue Oct 18, 2011 3:28 pm

Post by jagspaul »

Thanks nisma, I got the point.

Now I want to draw your attention to another point.
In the source of the MBI5030 (HW PWM) driver, the data is fetched from the image buffer column-wise, and a temp buffer holds the processed data for a whole column chain; this temp buffer is double-buffered.
mbi5030.xc
-----------------
int leddrive_mbi5030(....)
unsigned short buffers[2][2][NUM_MODULES_X*FRAME_HEIGHT][3];
...
for (int x=0; x<SCAN_RATE; x++)
{
  par
  {
    leddrive_mbi5030_pins(c, p_led_out_r0, p_led_out_g0, p_led_out_b0,
                          p_led_out_r1, p_led_out_g1, p_led_out_b1,
                          p_spi_addr, p_spi_clk, p_spi_ltch, p_spi_oe,
                          buffers[0], lastx, now, t);
    retval = ledreformat_mbi5030(cLedData, cLedCmd, c, buffers[1], x);
  }
  ...

Whereas in the MBI5026 (SW PWM) driver, the temp double buffer holds a whole display frame instead of a single column chain, so the memory requirement is much higher, which decreases the total pixel capacity (see the rough sizing sketch after the excerpt below).

mbi5026.xc
-----------
int leddrive_mbi5026(.........)

unsigned drivebuf1[BCM_BITS][OUTPUT_BUF_SIZE];
unsigned drivebuf2[BCM_BITS][OUTPUT_BUF_SIZE];
...
while (1)
{
  cWdog <: 1;
  par
  {
    leddrivepins_mbi5026(c, drivebuf2,
                         p_led_out_r0, p_led_out_g0, p_led_out_b0,
                         p_led_out_r1, p_led_out_g1, p_led_out_b1,
                         p_spi_addr, p_spi_clk, p_spi_ltch, p_spi_oe,
                         oeval2, scanval);
    retval = ledreformat_mbi5026(cLedData, cLedCmd, c, drivebuf1, oeval1);
  }
  if (retval)
    return retval;
  ...
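
To make the difference concrete, here is a rough sizing sketch; all the numbers are assumptions picked for illustration, the real values live in the kit's headers:

Code:

#include <stdio.h>

/* Illustrative values only (not from the XMOS sources). */
#define NUM_MODULES_X   4
#define FRAME_HEIGHT    32
#define BCM_BITS        10
#define OUTPUT_BUF_SIZE 256   /* assumed frame-sized plane, in words */

int main(void) {
    /* MBI5030 path: double-buffered data for one column chain */
    unsigned short column_buffers[2][2][NUM_MODULES_X*FRAME_HEIGHT][3];
    /* MBI5026 path: two frame-sized BCM plane buffers */
    unsigned drivebuf1[BCM_BITS][OUTPUT_BUF_SIZE];
    unsigned drivebuf2[BCM_BITS][OUTPUT_BUF_SIZE];
    printf("column buffers: %u bytes\n", (unsigned)sizeof(column_buffers));
    printf("frame buffers:  %u bytes\n",
           (unsigned)(sizeof(drivebuf1) + sizeof(drivebuf2)));
    return 0;
}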

Now my questions are:
1) What is the advantage of the large memory buffer over a single column chain (given that it decreases pixel capacity)?
2) If we modify the code to process a single column chain, does that raise any problem or disadvantage?
Interactive_Matter
XCore Addict
Posts: 216
Joined: Wed Feb 10, 2010 10:26 am

Post by Interactive_Matter »

Hi,

I promised to deliver my bit density modulation code. It seems to work better with LEDs than PWM, since:

- it adapts itself to the refresh speed of the LEDs
- it spreads the on/off times better over time, so that flickering can (often) be reduced

The only disadvantage of my code is that it is not really speed-optimised, and you need to maintain error values of the same size as the pixel buffers - but that can happen on another core. But it is perhaps worth a look even so:

Code:

/*
 * returns 0 or 1 as bit output
 * TODO is there a bool - or anything more efficient
 * and the quantization error
 */
{int, int} bit_density_modulate(int value, int quantisation_error) {
	int result;
	int new_quantisation_error;
	// First-order sigma-delta: emit a 1 whenever the 8-bit value has
	// pulled ahead of the accumulated error, so the long-run density
	// of 1s approaches value/255.
	if (value >= quantisation_error) {
		result = 1;
		// we emitted 255 "worth" of light; carry the overshoot
		new_quantisation_error = 255 - value + quantisation_error;
	} else {
		result = 0;
		// we emitted nothing; the deficit accumulates
		new_quantisation_error = -value + quantisation_error;
	}
	return {result, new_quantisation_error};
}
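
A hypothetical usage sketch (the names and sizes are mine, not part of the code above): call it once per pixel on every refresh pass, keeping one error value per pixel between passes:

Code:

#define WIDTH  32   // assumed panel size
#define HEIGHT 16

// One refresh pass: turns an 8-bit grayscale buffer into one on/off
// bit per pixel, updating the per-pixel error in place.
void refresh_pass(int value[HEIGHT][WIDTH],
                  int error[HEIGHT][WIDTH],
                  int bits[HEIGHT][WIDTH]) {
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++) {
            int b, e;
            {b, e} = bit_density_modulate(value[y][x], error[y][x]);
            bits[y][x] = b;
            error[y][x] = e;
        }
}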
nisma
Active Member
Posts: 53
Joined: Sun Dec 13, 2009 5:39 pm

Post by nisma »

This is a detail that takes little work to change.
You have the tile kit. The first thing to do is to change the Ethernet code
from two cores to one core and to add support for booting the other
XMOS devices over the xlink. It's important to know how many resources
are free. Currently the tile kit actively uses 3 cores. It's essential to reduce that
to two cores, and that is possible too. Only one Ethernet interface is needed; this frees
some threads. Having done this, the design should work as before on the
LED tile kit without modification to the hardware, and this working software
could/should be ported to single-core L1 devices. That is my advice.
After that, the system could be improved by rewriting the PC software too.
jagspaul
Experienced Member
Posts: 117
Joined: Tue Oct 18, 2011 3:28 pm

Post by jagspaul »

Is it possible to boot all the L1 controllers independently? Each L1 board would have a serial flash for the boot code. After power-on they would boot independently and get different IDs from a DIP switch. Each L1 controller would receive packets through the xlinks: if a packet's ID matches the board ID, the packet is stored, otherwise it is passed on to the next L1 through the xlinks. Something like the sketch below.
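
Code:

// Rough sketch of that forwarding rule (all the names here are made up):
typedef struct { unsigned id; /* ... image payload ... */ } packet_t;

extern void store_packet(packet_t pkt);       // keep: this board's data
extern void forward_downstream(packet_t pkt); // pass down the chain

void on_xlink_packet(unsigned my_id, packet_t pkt) {
    if (pkt.id == my_id)
        store_packet(pkt);
    else
        forward_downstream(pkt);
}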

I am not sure about this; I just want to get some ideas from you.

My question:
In this L1 daisy chain, how will the code be built?
1) As a single program (Ethernet + LED driver 1 + ...) with the multi-core feature.
2) As different single-core programs, written separately for the Ethernet and the LED driver.


jags
nisma
Active Member
Posts: 53
Joined: Sun Dec 13, 2009 5:39 pm

Post by nisma »

The chained L1 controllers can be booted from the Ethernet controller over the xlink (no local flash), and
because of this the enumeration is automatic; no DIP switch is required.
I would guess different single-core programs for the Ethernet and LED drivers.
Currently the Ethernet component has an Ethernet switch function in it and two Ethernet interfaces, and it
needs two cores. Removing the Ethernet switch functionality and the second Ethernet interface, the
Ethernet part fits into one core. That's the Ethernet program. Just some copy/paste/delete operations.

This is the actual LED code, slightly simplified:
led1.jpg
And this is the same diagram as it should look in order to fit on an L1:
led2.jpg
jagspaul
Experienced Member
Posts: 117
Joined: Tue Oct 18, 2011 3:28 pm

Post by jagspaul »

If there are different single-core programs for the Ethernet and LED drivers, then there are multiple bin files for the Ethernet board and the LED driver boards. Now please tell me how multiple bins will boot the two LED driver chains (one chain connected to Ethernet link0 and the other to Ethernet link1).

If there is no DIP switch, then how will an LED driver L1 board be identified within a chain? How will the Ethernet L1 board send the right image packet to the right LED driver board? How will an LED driver board decide which packets need to be stored and which need to be passed on to the next board in the chain?
nisma
Active Member
Posts: 53
Joined: Sun Dec 13, 2009 5:39 pm

Post by nisma »

There is, at least in my picture, only one Ethernet link, not two.
The Ethernet controller has two xlinks, namely 0 and 1.
The driver board connected at link0 has the id 001, the next board in the chain has the id 002, and the
k-th driver board has the id 00k. On xlink1 of the Ethernet board, driver board nr. 10 has the id 01A, and it's the same with registered Ethernet boards: the second Ethernet board will have the id 100.
For booting, the xlink switch component in the above image passes the
info to the Ethernet controller: knowing that it is the board with id 003, it reports that board 004 wants to boot.
The Ethernet controller then initiates a TFTP request, if configured; if there is no answer, it
uses the default driver image stored in the SPI flash. After that it sends down a config message telling
the board with id 000 to change its id to 004, probably together with other info such as the driver to use and the
LED matrix size, instead of hardcoding it.
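
If I read those examples right (001, 002, ... on link0; 01A for board 10 on link1; 100 for the second Ethernet board), the id could simply be three hex digits, Ethernet board / link / position - but this encoding is my guess, not something stated above:

Code:

// Hypothetical id encoding consistent with the examples in this post.
// It limits each link to 15 driver boards, which may or may not be intended.
unsigned make_id(unsigned eth_board, unsigned link, unsigned pos) {
    return (eth_board << 8) | (link << 4) | pos;
}
// make_id(0, 0, 1)  == 0x001, make_id(0, 1, 10) == 0x01A,
// make_id(1, 0, 0)  == 0x100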