Error: Meta information ("_Z3runPhj.nstackwords") for function "_Z3runPhj" cannot be determined.

Technical questions regarding the XTC tools and programming with XMOS.
varun
New User
Posts: 3
Joined: Thu Nov 28, 2024 9:41 am


Post by varun »

Hi All,

Not sure if this is a repeat post; if so, please point me to the relevant post.

I am running into an issue with a function call. The code is similar to the one in the ai_tools example, with main and support .cpp files. I have added this to the explorer board example from xcore_iot.
Camera_Main.xc captures images; no issues with that at all. The issue arises when I add the ai example to run inference on the captured image.
The error:
main.c:(.text+0x1e): Error: Meta information ("_Z3runPhj.nstackwords") for function "_Z3runPhj" cannot be determined.
/home/varun/xmos/xcore_iot/xcore_iot/examples/freertos/explorer_board/src/Camera_Main.xc:(.dp.data.4+0x24): Error: Meta information ("_Z3runPhj.nstackwords") for function "_Z3runPhj" cannot be determined.
/home/varun/xmos/xcore_iot/xcore_iot/examples/freertos/explorer_board/src/Camera_Main.xc:(.dp.data.4+0x24): Error: lower bound could not be calculated (function is recursive?).


I figured this might be happening because the compiler is having a hard time determining the stack size.

If I add #pragma stackfunction 100 (or 1024, or any number), the compilation completes, but the .xe file does not get created because the data + code size exceeds 512k. The model weights are stored on LPDDR, so that should not be the issue.
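For reference, a minimal sketch of how the pragma is placed (the function body here is a hypothetical stand-in for the real inference entry point, and 1024 is just the value tried above, not a recommendation):

```cpp
#include <cstdint>

// With the XTC tools, "#pragma stackfunction N" applies to the NEXT
// function definition and reserves N stack words for it, overriding the
// mapper's own stack analysis (other compilers simply ignore the pragma).
// The body below is a placeholder for the real inference code.
#pragma stackfunction 1024
int run(uint8_t *image, unsigned len) {
    int sum = 0;
    for (unsigned i = 0; i < len; ++i) {
        sum += image[i];  // placeholder work on the image data
    }
    return sum;
}
```

Note that the pragma only silences the "nstackwords cannot be determined" error by supplying a manual bound; it does not reduce the actual memory footprint.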

How do I calculate the stack size? Is there an XMOS tool that can do this?

I also tried running the function as an RTOS task, which automatically calculates the stack size: https://www.xmos.com/documentation/XM-0 ... ssues.html
In that case, I get the following error:
support.cpp:(.text+0x16): Error: undefined reference to 'runTask.nstackwords'

The change I made:
xTaskCreate((TaskFunction_t) runTask,
            "run",
            RTOS_THREAD_STACK_SIZE(runTask),
            NULL,
            configMAX_PRIORITIES - 1,
            NULL);

For the run function in support.cpp in any of the ai_tools examples, a runTask function is defined, and this is what is passed to xTaskCreate.
I am not using any function pointers anywhere. The only change is that the run() function takes a uint8_t* argument (image data), unlike in the ai_tools example.
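One thing worth checking (an assumption on my part, not something confirmed in this thread): RTOS_THREAD_STACK_SIZE() pastes the literal function name into a reference to `<name>.nstackwords`, so a task function defined in a .cpp file needs C linkage. Otherwise the symbol carries the C++-mangled name (something like `_Z7runTaskPv.nstackwords`), and the plain `runTask.nstackwords` reference cannot resolve. A minimal sketch:

```cpp
// Hypothetical sketch: giving the task function C linkage so the tools
// can emit and find an unmangled "runTask.nstackwords" symbol.
// The body is a placeholder; the real task would run inference on the
// image passed via args.
extern "C" void runTask(void *args) {
    if (args != nullptr) {
        *static_cast<int *>(args) = 1;  // placeholder: mark that we ran
    }
}
```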

I am new to XMOS, and any help would be appreciated.

Thank you,
Best Regards
albertoisorna
Verified
New User
Posts: 3
Joined: Wed Jan 03, 2024 10:41 am

Post by albertoisorna »

Hello Varun,

Your first approach of adding the #pragma stackfunction seems correct, but there are other things to consider. For instance, if the tensor arena + code is bigger than your RAM size, then the compilation can fail, even if your model will go to LPDDR.
That is because you need to be able to fit each layer of your model in RAM (you could split conv layers, reduce the model size, ...).
Also, this type of issue suggests that the ai_tools version used here is a bit old. Could you clarify the following?

* Which ai_tools version are you using?
* Which XTC tools version?
* Which build system (cmake, xcommon_cmake, ...)?
* What is in your camera_main.xc, and how are the par jobs distributed?
* The tensor arena size (you can find this in model.tflite.h)

Thanks in advance,
Alberto
varun
New User
Posts: 3
Joined: Thu Nov 28, 2024 9:41 am

Post by varun »

Hi,

- ai_tools version: I am using commit e1acb3f from Oct 9th, 2024
- XTC Tools: 15.2.1; 15.3.0 causes some errors, so I am sticking to 15.2.1
- xcore_iot: commit 85ecd0d from Sept 26th, 2024
- build system: cmake
- In camera_main, on tile[1], raw image data is acquired from the MIPI bridge, converted to JPEG, and transferred to tile[0]. Not much is running on tile[0]; I plan to add additional processing there based on the image. Model inference is to be done before the transfer to tile[0].
- tensor arena size: LARGEST_TENSOR_ARENA_SIZE 217728
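To put that arena figure in context, here is a rough budgeting sketch. The 512 KB figure comes from the "exceeding 512k" limit mentioned earlier in the thread, and the arena declaration mirrors the usual TFLite Micro pattern rather than this project's actual code:

```cpp
#include <cstddef>
#include <cstdint>

// Numbers taken from this thread, used purely for illustration.
constexpr size_t kTensorArenaSize = 217728;       // LARGEST_TENSOR_ARENA_SIZE
constexpr size_t kTileSramBytes   = 512 * 1024;   // per-tile SRAM budget

// The arena must live in on-tile SRAM even when the weights are in
// LPDDR, so roughly 213 KB of the 512 KB tile is consumed before any
// code and data are counted.
alignas(8) static uint8_t tensor_arena[kTensorArenaSize];

constexpr size_t kRemaining = kTileSramBytes - kTensorArenaSize;
```

This is why Alberto's point about fitting each layer in RAM matters: the arena alone leaves only about 300 KB for everything else on the tile.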
upav
Verified
Junior Member
Posts: 6
Joined: Wed May 22, 2024 3:30 pm

Post by upav »

Hi Varun,

I know that the way one uses ai_tools has changed since xcore_iot was last released, so I don't think you can "just" repin lib_tflite_micro and lib_nn.
I can give you some short advice on how to "properly" change the ai_tools version and see if it works.

Change ai_tools:

The latest ai_tools is now a complete Python package. (I think in the version used in xcore_iot, the Python package is only used for the xformer, and you have to build the backend (lib_tflite_micro, lib_nn) from source.)
The modern ai_tools Python package comes with a compiled .a file inside, which you can just link to your application.
1. First, pip install the latest xmos-ai-tools (1.3.1, I think).
2. You no longer need to check out and build lib_tflite_micro and lib_nn, so you can safely clean the CMakeLists of building those libraries from source.
3. To get the .a file from the Python package and declare it in cmake, see the example here:
Lines 1-32 declare a static library called tflite_micro in cmake, which you can just link to your application. You can copy those lines into your CMakeLists and it should work.
Lines 35-44 add your .tflite as a build dependency, so if you change the model, you can just rerun xmake (make, ninja) and it will regenerate your model files and compile the app with them. This is nice when you're changing the model frequently, as it automates xforming your model in the build system. Alternatively, you can always just xform your model once and commit the .cpp and .h files.

If it still doesn't work:

I'm not sure it can be solved in a forum format (though I may be wrong). I would advise opening a support ticket on xmos.com, describing what exactly you are trying to do so we can have a proper look at it.
Pavel
xmos software engineer
varun
New User
Posts: 3
Joined: Thu Nov 28, 2024 9:41 am

Post by varun »

Hi upav,

I did notice that the submodules for the tflite and nn libs were pinned to different commits in the two repos (and I did face some issues), and I have changed the CMakeLists file in modules/inferencing (xcore_iot) to link to the static lib used by ai_tools.
The change to the CMakeLists is similar to the one you posted.
upav
Verified
Junior Member
Posts: 6
Joined: Wed May 22, 2024 3:30 pm

Post by upav »

OK, so it looks like you are using the most recent ai_tools.
If it still doesn't work in that case, I would advise opening a support ticket and providing more details on your project and what you are trying to solve.

Sorry if it's not very helpful,
Pavel
xmos software engineer