Interrupt jitter is a total non-issue for motor control on any modern ARM Cortex, I would dare to say even including the Cortex-M0, maybe excluding some very special high-tech micromachining applications. The maximum current change rate, set by the motor inductance, and the physical inertia both act on timescales orders of magnitude longer than the jitter.
Interrupt prioritization is trivial, and we are talking about tens of nanoseconds of jitter in systems which need to be controlled at intervals of roughly tens to hundreds of microseconds.
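To put rough numbers on it (illustrative values, not from any specific motor): with a 48 V bus and 200 µH of phase inductance the current can slew at most V/L = 0.24 A/µs, so 50 ns of jitter in when the duty cycle gets updated shifts the current by roughly 0.012 A - buried in the noise of any realistic current measurement.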
A typical case is that the timer hardware generates the PWM and handles sudden overcurrent events, and the control interrupt is triggered once per PWM cycle, at, say, 20 kHz.
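For flavour, the per-cycle work in such an ISR is little more than reading the latched current samples and writing new compare values. This is a generic sketch, not any particular vendor's code: the registers are STM32-style, and adc_read_phase_a/b, foc_step and duty_t are made-up helpers standing in for the real current loop.

    /* Sketch of a 20 kHz PWM-update ISR. Assumes the vendor CMSIS device
       header is included (STM32-style registers shown). */
    typedef struct { uint16_t a, b, c; } duty_t;
    extern int32_t adc_read_phase_a(void);
    extern int32_t adc_read_phase_b(void);
    extern duty_t  foc_step(int32_t ia, int32_t ib);

    void motor_pwm_update_isr(void)
    {
        TIM1->SR = ~TIM_SR_UIF;            /* clear the update interrupt flag */

        int32_t ia = adc_read_phase_a();   /* phase currents sampled by HW    */
        int32_t ib = adc_read_phase_b();   /* in the middle of the PWM period */

        duty_t d = foc_step(ia, ib);       /* a few hundred cycles of math    */

        TIM1->CCR1 = d.a;                  /* new compare values take effect  */
        TIM1->CCR2 = d.b;                  /* at the start of the next period */
        TIM1->CCR3 = d.c;
    }

A few tens of nanoseconds of jitter in when this runs simply does not matter; the hardware compare units produce the actual PWM edges.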
I have actually never seen a problem with interrupt latency or jitter, even when designing software-defined DC/DC converters with 10-20x higher switching frequency. That's why the dedicated peripherals exist - you won't bit-bang control signals in an interrupt.
I do see the theoretical appeal of the XMOS systems, but I have never encountered an actual use case yet. When I do, I'll be happy to try them out.
My latest large project is something that the XMOS architecture would handle quite poorly. A lot of theoretical combined MIPS, but I would hit the classic problems of parallelization trying to force it onto such a homogeneous architecture, where every single core is relatively low-performance.
A single-core Cortex-M7 MCU seems to be a good match for such a heterogeneous project:
* 1st priority interrupt, pre-empting everything else, handles safety shutdown of everything, triggered from multiple peripheral sources,
* 2nd priority interrupt runs a 250kHz software DC/DC dual-phase buck converter feedback control loop
* 3rd priority interrupts run two separate 20kHz 3-phase BLDC motor control loops
* 4th priority interrupts control the state machine that configures DMA and controls readout of 18 (!) MEMS inertial measurement devices, in a matrix of three shared SPI buses and six shared chip select lines, each producing data at 1kHz.
* Said 4th priority interrupt demotes its priority by triggering a software interrupt whenever the inertial data has been collected, to run a 250Hz motion estimation control loop (see the sketch after this list)
* Finally, 3D time-of-flight cameras are triggered by free-running code; their data is compensated with sensor calibration data, a lens blur and flare compensation model (basically combinations of small convolution kernels, box blur, and some lookup tables) is run, and the data is converted to point clouds for motion planning.
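Roughly how that ladder maps onto the NVIC (lower number = higher priority on Cortex-M). The CMSIS calls are real; the IRQn names are placeholders for whatever the actual part maps these sources to, and the spare vector used as a software interrupt for the 250 Hz task is an assumption on my part:

    /* Assumes a CMSIS device header; IRQn names are hypothetical. */
    extern int imu_sample_set_complete(void);        /* made-up helper        */

    void irq_priorities_init(void)
    {
        NVIC_SetPriority(OVERCURRENT_COMP_IRQn, 0);  /* safety shutdown       */
        NVIC_SetPriority(DCDC_TIMER_IRQn,       1);  /* 250 kHz buck loop     */
        NVIC_SetPriority(MOTOR1_TIMER_IRQn,     2);  /* 20 kHz BLDC loop #1   */
        NVIC_SetPriority(MOTOR2_TIMER_IRQn,     2);  /* 20 kHz BLDC loop #2   */
        NVIC_SetPriority(IMU_SPI_DMA_IRQn,      3);  /* IMU readout machine   */
        NVIC_SetPriority(MOTION_EST_SW_IRQn,    4);  /* 250 Hz estimation     */
    }

    /* "Demoting" the priority: once a full IMU sample set is in, the readout
       ISR pends the lower-priority software interrupt and returns, so the
       heavy 250 Hz math can never block the motor or converter loops. */
    void imu_readout_isr(void)
    {
        /* ... DMA / chip-select state machine steps ... */
        if (imu_sample_set_complete())
            NVIC_SetPendingIRQ(MOTION_EST_SW_IRQn);
    }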
All the interrupts take about 5% of the CPU, or about 30% when the buck converter is running. The time-of-flight pipeline (the last item above) is computationally expensive and requires the "oomph" of a 400MHz M7, with all the algorithms running from the core-coupled RAM. This is something which is difficult to parallelize properly, but having enough single-core performance to run the heavy algorithm as a bog-standard C implementation (without too many optimization tricks) shortens the development time.
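To give a flavour of the "bog standard C" part: a separable box blur over a depth frame is just a running sum per row, followed by the same pass per column. This is a generic sketch, not the actual compensation model; the frame size and radius are made up.

    #include <stdint.h>

    #define W 320                      /* illustrative frame size            */
    #define H 240
    #define R 2                        /* blur radius -> (2R+1)-tap box      */

    /* One horizontal pass of a separable box blur over a W x H uint16 depth
       frame, with edge clamping; the vertical pass is the same loop over
       columns. */
    static void box_blur_rows(const uint16_t *src, uint16_t *dst)
    {
        for (int y = 0; y < H; y++) {
            const uint16_t *row = src + y * W;
            uint32_t sum = 0;

            /* prime the running sum with the window centred on x = 0 */
            for (int x = -R; x <= R; x++)
                sum += row[x < 0 ? 0 : x];

            for (int x = 0; x < W; x++) {
                dst[y * W + x] = (uint16_t)(sum / (2 * R + 1));
                int add = x + R + 1;             /* entering the window       */
                int sub = x - R;                 /* leaving the window        */
                sum += row[add >= W ? W - 1 : add];
                sum -= row[sub < 0 ? 0 : sub];
            }
        }
    }

Nothing clever, but at 400 MHz with the data and code in tightly-coupled memory it just runs fast enough without hand-tuning.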
I run with caches disabled. All even remotely timing-critical code easily fits in the ITCM memory.
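Getting the hot code into ITCM is mostly a linker exercise; with GCC it typically looks something like the below. The ".itcm" section name, the startup code that copies it from flash at boot, and the function name are all project-specific assumptions here, not a universal recipe.

    /* Functions tagged like this end up in an output section that the linker
       script places in ITCM and the startup code copies there at boot. */
    #define ITCM_CODE __attribute__((section(".itcm")))

    ITCM_CODE void dcdc_control_isr(void)
    {
        /* timing-critical work fetches from zero-wait-state ITCM,
           so execution time stays deterministic with the caches off */
    }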