kevinm wrote: Please consider enabling -emit-llvm in future release builds. It's how compilers play nice with the LLVM ecosystem. For me, at least, LLVM support is a big selling point.
I discussed this with some people here and I don't think there is a problem with enabling this option in a future release. In the meantime you can get the same effect using: `xcc test.xc -mllvm -print-before=verify -c`
kevinm wrote: I'm working on a product in which user-supplied serial protocols and signal processing are compiled and run on a microcontroller. We're looking at XMOS parts because of their IO capabilities, threads, and LLVM backend. Implementation-wise, the host will compile code with LLVM that will be sent to the device and run in its own XMOS thread or core and communicate with services written in XC over channels.
So, more specifically: how to communicate with XC-compatible channels, and more details on `select{}` (default clause, conditional inputs, timed inputs). It looks like port IO is covered by the architecture manual, provided there are intrinsics for all the relevant instructions.
If you have read the architecture manual you are most of the way to understanding the intrinsics - the majority map directly to individual instructions. The main exceptions are the llvm.xcore.waitevent() and llvm.xcore.checkevent() intrinsics, which are used to implement XC's select statement. Conceptually, llvm.xcore.waitevent() enables events and returns the address of the event vector (set with llvm.xcore.setv) of the resource that is ready; if no resources are ready it waits until one becomes ready. The result of the intrinsic must be used as the operand of an indirect branch. When the backend generates code it enables events but doesn't emit a branch instruction - the branch happens in hardware when the event is taken.
llvm.xcore.checkevent() is similar, except that if no resources are ready it returns immediately. The value to return in that case is given as an operand to the intrinsic - this corresponds to a default case in an XC select.
There is a very basic example that uses these intrinsics in the LLVM test suite (test/CodeGen/XCore/events.ll).
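To give a feel for the shape of the pattern, here is a minimal IR sketch loosely modeled on that test. The resource-setup details (address space 1 for resource pointers, setting the vector with llvm.xcore.setv before enabling events with llvm.xcore.eeu) are my reading of the test, not a specification - consult events.ll for the exact form:

```llvm
; Sketch: an XC-style select over a single chanend %c.
declare void @llvm.xcore.setv.p1i8(i8 addrspace(1)*, i8*)
declare void @llvm.xcore.eeu.p1i8(i8 addrspace(1)*)
declare i8* @llvm.xcore.waitevent()
declare i32 @llvm.xcore.in.p1i8(i8 addrspace(1)*)

define i32 @select_on(i8 addrspace(1)* %c) {
entry:
  ; Point the chanend's event vector at the handler block,
  ; then enable events on the resource.
  call void @llvm.xcore.setv.p1i8(i8 addrspace(1)* %c,
                                  i8* blockaddress(@select_on, %handler))
  call void @llvm.xcore.eeu.p1i8(i8 addrspace(1)* %c)
  ; Block until an enabled resource is ready, then branch to its vector.
  ; The result must feed an indirect branch.
  %dest = call i8* @llvm.xcore.waitevent()
  indirectbr i8* %dest, [label %handler]

handler:
  ; The chanend is ready: take the input.
  %v = call i32 @llvm.xcore.in.p1i8(i8 addrspace(1)* %c)
  ret i32 %v
}
```

A select with a default clause would use llvm.xcore.checkevent() instead, passing the block address of the default case as its operand: if nothing is ready, checkevent returns that address immediately and the indirect branch lands in the default block.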
kevinm wrote: Eventually, I'll need to be able to codegen and link the module so it can be loaded into RAM and run as a thread, but for development I can just link it into the .xe with the XMOS tools. I think run-time loading will require any calls into XC code to be inlined or included in the LLVM module so it doesn't need a dynamic linker, so that's where -emit-llvm would be extremely handy.
Do you need the backend to emit binary code directly (i.e. without an external assembler)? Currently this isn't supported, and the XCoreInstructionInfo.td file doesn't specify any instruction encoding information. This is something I'd like to fix for another reason (I'd like to generate a disassembler to replace some hand-written code in https://github.com/rlsosborne/tool_axe), but I'm not sure when I'll get time to do it.
kevinm wrote: I've been using mainline LLVM 3.1, which comes with xCORE support by default. Richard, does mainline differ significantly from what you use in xcc 12 / xcc 11 / have on the open source page? I haven't done more than hello world and some math, so if the XMOS intrinsics are totally broken there, I wouldn't know yet.
Yes, they do differ. We make releases from our own internal branch of LLVM - this gives us stability ahead of a tools release in cases where LLVM's release schedule doesn't match up with our own. We periodically merge in changes from upstream and periodically submit patches back to minimize the difference between the two branches. What is present upstream should work - if you find anything that doesn't, please file bugs on llvm.org so I can take a look. However, there are a few features that are missing upstream:
* Support for passing XTA information through the compiler.
* Some optimization passes that specifically target xCORE resource intrinsics.
* Support for emitting linker expressions describing maximum resource usage.
There will also be some things missing the other way, since our internal branch is currently based on an older version of LLVM (3.0). In general I would encourage people to use mainline LLVM if possible, as the presence of public bug trackers, mailing lists and repositories makes collaborating a lot easier.