The results are interesting. I’ve used this reverb patch from our library for testing: https://www.rebeltech.org/patch-library/patch/Owlgazer_Shimmer_Reverb
First of all, on the current OWL (STM32F4 MCU) we get the expected effect of trading some extra RAM for lower CPU utilization:
DLT - RAM - CPU
1024 - 341k - 88%
2048 - 352k - 85%
4096 - 362k - 82%
8192 - 369k - 80%
DLT values outside of that range made no difference.
But the real surprise is running it on an STM32H7 MCU, which is similar to what we’ll have on the next generation OWL board (am I leaking secrets here already?). Two points matter here: it has a cache, and there’s 512KB of internal SRAM that is used for dynamic memory before falling back to SDRAM. SDRAM has much higher latency, and things get worse as we get more cache misses on it.
DLT - RAM - CPU
1024 - 341k - 8%
2048 - 352k - 8%
4096 - 363k - 7%
8192 - 369k - 7%
So, as long as we’re not allocating on SDRAM, the DLT setting doesn’t seem to make any measurable difference.
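The SRAM-before-SDRAM allocation order mentioned above can be sketched as a two-tier bump allocator. This is an illustrative simplification, not the actual OWL firmware allocator; the struct and field names are hypothetical:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical two-tier bump allocator: serve requests from fast internal
// SRAM first, and fall back to slower external SDRAM once SRAM is full.
struct TieredAllocator {
    uint8_t* sram;  size_t sram_size;  size_t sram_used;
    uint8_t* sdram; size_t sdram_size; size_t sdram_used;

    TieredAllocator(uint8_t* s, size_t ss, uint8_t* d, size_t ds)
        : sram(s), sram_size(ss), sram_used(0),
          sdram(d), sdram_size(ds), sdram_used(0) {}

    void* allocate(size_t bytes) {
        bytes = (bytes + 7) & ~size_t(7); // keep 8-byte alignment
        if (sram_used + bytes <= sram_size) {
            void* p = sram + sram_used;   // fast path: internal SRAM
            sram_used += bytes;
            return p;
        }
        if (sdram_used + bytes <= sdram_size) {
            void* p = sdram + sdram_used; // slow path: external SDRAM
            sdram_used += bytes;
            return p;
        }
        return nullptr; // out of memory
    }
};
```

With this layout, small patches live entirely in SRAM and never pay the SDRAM latency; only patches whose buffers overflow the 512KB region (like the one below) start hitting external memory.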
To confirm this, I ran another patch that adds several more long buffers and therefore has to use SDRAM. This patch can’t run on the current OWL at all; the F4 isn’t fast enough for it.
DLT - RAM - CPU
1024 - 6M - 22%
2048 - 6M - 26%
4096 - 6M - 30%
8192 - 6M - 36%
So on the H7 we have the opposite effect: increasing DLT requires more SDRAM access, which slows things down a lot. Luckily, we can have separate settings for these platforms.
I’d suggest using a 4k or 8k setting for OWL2 and 1k for OWL3.
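Picking the per-platform default could be as simple as a compile-time switch. A minimal sketch, assuming hypothetical build macros (`OWL2_BUILD`/`OWL3_BUILD` and `DELAY_LINE_THRESHOLD` are illustrative names, not actual firmware flags):

```cpp
// Hypothetical per-platform default for the DLT setting.
#if defined(OWL3_BUILD)       // STM32H7: cached, SDRAM-backed - keep DLT small
  #define DELAY_LINE_THRESHOLD 1024
#elif defined(OWL2_BUILD)     // STM32F4: no cache, internal RAM - larger DLT wins
  #define DELAY_LINE_THRESHOLD 8192
#else
  #define DELAY_LINE_THRESHOLD 4096  // conservative fallback for other targets
#endif
```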