I'm developing a custom audio application on the XK-VOICE-L71 development kit and running into some hardware integration issues. I'd appreciate your guidance on resolving these problems and on a few architectural decisions.
My Application:
Target: XK-VOICE-L71 development kit
Goal: PDM microphone input → Audio processing (AEC/FFT/DRC) → I2S output to DAC
Current Approach: Baremetal implementation (no RTOS) using XMOS libraries
Libraries: lib_aec, lib_mic_array, lib_xcore_math, lib_i2s
Issues I'm Facing:
1. I2C Communication with TLV320DAC3101 DAC
Problem: DAC not responding to I2C commands
Symptoms: All I2C reads return 0xFF (no device acknowledgment)
What I've tried:
Implemented bit-banging I2C with configurable timing
Tested I2C bus scan (no devices found)
Verified I2C address (0x18) and register definitions
Added delays between operations
Hardware: Using J3 speaker output port with TLV320DAC3101
Question: Are there specific I2C timing requirements or initialization sequences for the XK-VOICE-L71's DAC? Should I be using the hardware I2C peripheral instead of bit-banging?
2. PDM Microphone Array Integration
Problem: Need to integrate real PDM capture instead of simulated data
Current: The mic_array library is linked in, but capture_pdm_frame() currently returns simulated data rather than real PDM samples
Hardware: 2 PDM microphones on the XK-VOICE-L71
Question: What's the correct non-RTOS API sequence for PDM capture using the mic_array library? I see the library is included but need guidance on the actual capture calls.
3. I2S Output Optimization
Current: Using bit-banging I2S output on ports
Question: Should I switch to the hardware I2S peripheral for better performance? If so, what's the recommended configuration for the XK-VOICE-L71?
4. Architecture Decision: RTOS vs Baremetal
Requirement: Microphone input, audio processing, and speaker output must run concurrently, each stage working on consecutive audio frames (pipelined processing)
Current: Baremetal with sequential processing
Question: For this type of real-time audio pipeline, would you recommend:
Switching to RTOS for better task scheduling and timing?
Staying with baremetal but implementing proper buffering between stages?
What are the performance and complexity trade-offs for each approach on the XK-VOICE-L71?
Technical Details:
Sample Rate: 16kHz target
Frame Size: 240 samples
I2S Format: 24-bit, stereo
PDM Clock: 3.072MHz (8:1 divider from 24.576MHz)
Processing: Need to process consecutive frames with minimal latency
Code Structure:
src/
├── main.c (audio processing loop)
├── hardware_setup.c (I2C, I2S, PDM initialization)
├── audio_pipeline.c (AEC/FFT/DRC framework)
└── hardware_setup.h (port definitions)
Specific Questions:
What's the recommended I2C configuration for the XK-VOICE-L71's DAC?
How do I properly initialize and capture from PDM microphones without RTOS?
Should I use hardware I2S peripheral instead of bit-banging?
Are there any XK-VOICE-L71-specific initialization requirements I might be missing?
For real-time audio processing with consecutive frame handling, is RTOS or baremetal with buffering the better approach?
Resources and Examples Request:
Are there any specific XK-VOICE-L71 examples available that demonstrate:
I2C communication with the onboard DAC?
PDM microphone capture (both RTOS and baremetal)?
Hardware I2S peripheral configuration?
Real-time audio pipeline with buffering?
Do you have any reference implementations or application notes for custom audio applications on the XK-VOICE-L71?
Are there any debugging tools or utilities specifically for troubleshooting I2C and audio issues on this platform?
Can you point me to any existing examples in the XMOS voice solution that demonstrate proper audio pipeline buffering and timing?
Hardware Setup:
XK-VOICE-L71 development kit
J3 speaker output connected
PDM microphones on board
USB connection for programming and debug output
I've reviewed the XK-VOICE-L71 documentation and library examples, but would appreciate specific guidance for this real-time audio implementation approach. Having access to working examples or reference implementations would be extremely helpful for understanding the correct initialization sequences and API usage patterns.
Thank you for your assistance!
Best regards,
Benny Chainik