I see that some of the bigger xCORE-200 parts have dual RGMII. Knowing that, should it be possible to port the 100Mbps AVB daisy chain code to work with Gigabit Ethernet? That may permit daisy chaining with large channel counts. Specifically I am looking to achieve 64+ channels @ 48kHz on a daisy chain, which is beyond what is possible with 100Mbps Ethernet.
Since the RGMII interfaces are on different tiles I suppose I would be limited to about 250Mbps by the xCONNECT link between the two tiles? Still better than 100Mbps.
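For a rough sanity check on the link budget, here is a back-of-envelope calculation. The sample format, samples-per-packet, and per-packet overhead figures are my assumptions, not numbers from any XMOS document:

```python
# Back-of-envelope AVB stream bandwidth for N channels at 48 kHz.
# Assumptions: 32-bit samples, class A streams with 6 samples per
# channel per packet (8000 packets/s), and roughly 50 bytes of
# Ethernet + VLAN + AVTP header overhead per packet.

def stream_bandwidth_mbps(channels, samples_per_packet=6, sample_bytes=4,
                          sample_rate=48_000, overhead_bytes=50):
    packets_per_sec = sample_rate // samples_per_packet   # 8000/s for class A
    payload = channels * samples_per_packet * sample_bytes
    # 64 ch x 6 x 4 = 1536 B would exceed the ~1500 B Ethernet MTU,
    # so 64 channels must be split across at least two streams.
    assert payload <= 1476, "payload too big for one Ethernet frame"
    return packets_per_sec * (payload + overhead_bytes) * 8 / 1e6

# 64 channels carried as two 32-channel streams:
two_streams = 2 * stream_bandwidth_mbps(32)
print(f"64 ch as 2 x 32 ch: {two_streams:.1f} Mbps")   # ~104.7 Mbps
```

Under these assumptions, 64 channels lands just above what 100Mbps Ethernet can carry but well within a ~250Mbps inter-tile budget.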
Further, are any dev boards coming out that support one of these chips with dual RGMII (i.e. for testing the daisy chain port)? Perhaps, if it is possible to link two XK-EVK-XE216 boards together over a 5-wire xCONNECT link, that might simulate a chip with dual RGMII; obviously I could keep the wires pretty short to avoid noise errors on the xCONNECT link.
Dual RGMII - GbE AVB daisy chain?
In theory this is doable. However, there is a *lot* of work to get to this stage. Some thoughts:
- The 100Mb dual mac component relies on shared memory - each Gb mac would be on a separate tile. So you will need to write a new self-contained switch function (connecting to 2 x Gb macs) to handle legacy traffic.
- A 64ch endpoint is not yet implemented, but should be feasible. 32ch is in app note 203 - https://download.xmos.com/XM-008708-AN-4.pdf. Getting to 48ch is probably OK with moderate work (possibly doubling/quadrupling up on TDM core + buffering resources, giving the cores 100 MIPS, and maybe some more optimisation). 64ch may require quite a lot of hand crafting and possibly doubling up on talker/listener units. Some work had to be done on packetisation even to get to 32ch.
- You will definitely need a 4-tile device - the XE232. xCONNECT performance should be higher than 250Mbps on chip (especially within the same xCORE node), however the 4 switch paths may be a restriction for the connections between blocks. I haven't fully checked this yet, though.
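On the audio side, the TDM slot arithmetic gives a feel for why 64 channels needs more I/O and buffering resource than 32. The 32-bit slot width and the lane split below are my assumptions, not figures from the app note:

```python
# Bit clock required per TDM data line for a given channel count.
# Assumption: 32-bit slots at a 48 kHz sample rate.

def tdm_bit_clock_mhz(channels_per_line, slot_bits=32, sample_rate=48_000):
    return channels_per_line * slot_bits * sample_rate / 1e6

# 64 channels spread over 8 data lines (8 channels per line):
print(tdm_bit_clock_mhz(8))    # 12.288 MHz per line - easy to clock
# All 64 channels on a single line would need an impractical bit clock:
print(tdm_bit_clock_mhz(64))   # 98.304 MHz
```

Keeping the per-line bit clock modest means more data lines, and with them more port, core, and buffer resources - consistent with the doubling-up the reply anticipates.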
A single-chip switch and 64ch endpoint would certainly be a very elegant solution, but from an engineering-effort (i.e. cost) point of view it may be worth considering focusing on the 64ch endpoint along with an off-the-shelf Gbit AVB switch.
Engineer at XMOS