Gaussian Blur 2D patch for Lich

I wondered what it would sound like if I treated audio like 2D texture data and blurred it. In one dimension it’s basically just a low pass filter, but in two dimensions it turns into some kind of weird flanger / comb filter thing.

Here’s a video demo: https://youtu.be/izN-dVmpV9s

And here’s where you can install it: https://www.rebeltech.org/patch-library/patch/_Lich_Gaussian_Blur_2D

As I was making the demo video I noticed some crunchiness in the sound, so there may still be some work to do here. It becomes particularly pronounced when processing sustained chords. To get texture size changes without clicks, I'm blurring at two adjacent texture sizes and then blending them, which meant I had to do the processing at half the sample rate for it to run on the Lich. Maybe this down/upsampling is part of the problem?


Hey, this gives me commercial module review vibes. We're not supposed to get quality demos here! Only "my module is broken" posts.

Depends… Downsampling to 1/2 SR means that any frequency > 12 kHz that your patch generates would lead to aliasing. If those frequencies are present in the downsampler's input buffer, they would be filtered away. I'm not sure that Gaussian blur itself is supposed to add frequency content, as it's effectively an LPF in 2D, so it might be related to something else.

I would suggest checking that noise from the ADC inputs is not causing this for some of the parameters, i.e. set all values read from parameters to constants instead of using getParameterValue to see if the issue goes away. If it does, enable them one by one to determine which parameter is causing it (could be more than one). It could be that the patch is sensitive to small changes and the tiny amount of noise you get from analog readings is enough to cause discontinuities somewhere. This is a classic mistake when setting delay line lengths, and maybe something similar is happening here, i.e. when doing the texture lookup.
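The usual fix, if noisy readings do turn out to be the culprit, is one-pole smoothing of each parameter. A minimal sketch of both the debugging stand-in and the smoother; `SmoothedParam` and `constantParam` are made-up names, not from the actual patch:

```cpp
#include <cassert>
#include <cmath>

// One-pole smoothing for a noisy parameter reading.
// coeff close to 1.0 = slower response, more noise rejection.
struct SmoothedParam {
    float value = 0.0f;
    float coeff;
    explicit SmoothedParam(float c) : coeff(c) {}
    float process(float raw) {
        value = coeff * value + (1.0f - coeff) * raw;
        return value;
    }
};

// For debugging: a constant stand-in that ignores the ADC entirely,
// as suggested above (replace getParameterValue calls with this).
inline float constantParam(float v) { return v; }
```

If the artifacts vanish with `constantParam` everywhere, re-enable real readings one at a time behind a `SmoothedParam` to find the offender.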

There are some optimizations that could be added (block-based processing in BlurSignalProcessor, CircularTexture), but I'm not sure that would be enough to make the patch fast enough to run at full SR on this MCU. It seems like it would be bottlenecked by non-sequential IO on SDRAM for Y-axis lookups.

Yes, I suspect trying to do block-based will be just as slow or possibly slower because what I discovered is that the compiler appears to optimize the readBilinear calls to just two reads and one interpolation due to the 0 passed in for the opposing axis. If I process in a block, I will have to provide a texture coordinate for that axis and it will have to do a full bilinear lookup, which is far too slow.

I will try your suggestion of eliminating parameter reads to see if that’s causing the noise.

Well definitely some weird high frequency content being introduced. This is a low frequency sine wave after blurring:

Guess I should have spent more time on this before releasing.

I figured out that the problem shown above is caused by the gain compensation, which is essentially too reactive. I had tried to address that by smoothing the RMS values from the dry and blurred signals, but clearly that's not good enough. I've also tried smoothing the calculated gain compensation values, which fixes the above, but that can cause big jumps in volume when scanning across blur amount or texture size, and clipping, because the gain compensation lags behind and can sometimes make a signal louder than it needs to be. Tomorrow I will try lerping the gain compensation across the block to see if it keeps the required reactiveness without creating those jagged edges. But I'm also happy to hear other ideas about how to approach it.
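The lerp idea boils down to a linear gain ramp across the block. A minimal sketch (illustrative names, assuming the previous and target gains are computed per block elsewhere):

```cpp
#include <cassert>
#include <cmath>

// Ramp the gain linearly from the previous block's value to the new target,
// so per-block gain jumps don't put jagged edges in the waveform.
// (Illustrative sketch, not the actual patch code.)
void applyGainRamp(float* block, int size, float prevGain, float targetGain) {
    const float delta = (targetGain - prevGain) / size;
    float g = prevGain;
    for (int i = 0; i < size; ++i) {
        g += delta;        // per-sample increment
        block[i] *= g;
    }
}
```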

Even without gain compensation, there is still some audible zipping when changing texture size or blur size quickly, which I think amounts to the problem @antisvin mentioned about setting delay line lengths. I think the only way to fix this would be to change blur size and texture size across the block, updating them at every sample, but the Lich is not going to be able to handle that.

Btw, you could also try just using the DaisySP compressor rather than reinventing a half-baked one. It even has block-based processing methods, unlike most of DaisySP.

Changing texture size per sample doesn't look too expensive, as in most cases you just need to recompute the interpolation parameter (unless you have to use a different set of tables). So maybe it won't be a problem.

Setting blur size requires quite a bit of computation. However, what you could try is to compute the new kernel for the parameters used in the current block (like in the original patch), then get the delta from the previous block's values divided by block size. Then, instead of recomputing the kernel, you could add the per-sample delta to the previous value. That's assuming that linear interpolation of the kernel gives the expected results.
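To make the delta scheme concrete, here is a small sketch of the idea: compute the target kernel once per block, derive a per-tap increment, and add it every sample instead of recomputing the kernel. `KernelRamp` is a hypothetical helper, not code from the patch:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Per-sample linear interpolation of a blur kernel between blocks.
// setTarget() is called once per block with the freshly computed kernel;
// tick() is called once per sample to advance every tap toward it.
struct KernelRamp {
    std::vector<float> current;   // kernel taps in use this sample
    std::vector<float> delta;     // per-sample increment for each tap

    void setTarget(const std::vector<float>& target, int blockSize) {
        if (current.empty()) current = target;   // first block: no ramp
        delta.resize(target.size());
        for (size_t i = 0; i < target.size(); ++i)
            delta[i] = (target[i] - current[i]) / blockSize;
    }

    void tick() {
        for (size_t i = 0; i < current.size(); ++i)
            current[i] += delta[i];
    }
};
```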

Yeah, it occurred to me yesterday that what I am trying to do is basically compression on the wet signal, I will look into this.

I feel like a dummy, but I'm not sure how to use the compressor to get the signal boost that I want. It seems like for it to be useful I need to boost the blurred signal before compressing it. I could boost it by some arbitrary amount, then set the threshold based on the input signal level, but this results in pretty heavy compression when the blurred signal is already close in volume to the input signal before it is boosted, which brings me back to wanting to look at the level difference to determine the boost.

The other problem I’m encountering is that the compressor is just expensive enough to cause pretty consistent buffer underruns without actually crashing the module - it pushes CPU up to around 98%, which only gets worse as texture size increases. :frowning:

It would indeed be nice if I could make the blurring a little less expensive so there’s a little more room for this post-processing.

You probably don’t need to boost the signal, because you get more or less the same result after compressor when you lower threshold and increase makeup gain.

I was thinking, another (a bit less common) approach for writing compressors is to use analytic signal from Hilbert transform to get signal magnitude. That means taking a real signal and sending it to 2 chains of allpass filters that produce complex signal in quadrature. The advantage here is that for a complex value we can get magnitude at any moment (at the cost of some fixed delay added during Hilbert transform).

This patch is already working with data in 2D, so maybe it could be possible to treat it as complex values and use magnitudes from the blurred texture? You might think that it’s similar to looking at RMS from a single axis, but instead of having to obtain average over multiple samples we would be able to use a momentary value.
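The momentary-magnitude part of this suggestion is simple once a quadrature pair exists. A sketch, assuming `inPhase`/`quadrature` come from a Hilbert transformer (or, per the idea above, from treating the 2D texture axes as a complex pair; the allpass chains themselves are not shown):

```cpp
#include <cassert>
#include <cmath>

// Envelope of an analytic signal: for a quadrature pair (i, q) the
// momentary magnitude is sqrt(i^2 + q^2) at every sample, with no
// RMS averaging window required.
inline float envelope(float inPhase, float quadrature) {
    return std::sqrt(inPhase * inPhase + quadrature * quadrature);
}
```

The gain computer of a compressor could then run on this per-sample envelope instead of a smoothed RMS estimate.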

Brilliant stuff once again @damikuy. You are really upping the game with your patches and videos.

Just a thought: instead of gain compensation, could the kernels be normalised? Or are you already doing that?


Yeah that’s what I thought too and maybe I’m just not doing it right. I enabled AutoMakeup and heard no increase in loudness even when pulling the compression threshold all the way down to -80dB. I could hear compression taking some effect around -40dB, but without a significant boost in volume.

I’m not sure I follow what you are suggesting here. It sounds like it would introduce even more computation than the regular compression does.

Thanks for your kind words @mars!

The kernels are already normalized, have a look at setGauss in BlurKernel.h. I suppose I could add a signal boost here, not sure how that would change things.
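For readers following along, this is conceptually what a normalized Gaussian kernel with an optional boost looks like. An illustrative sketch only, not the actual setGauss code from BlurKernel.h:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Build a Gaussian kernel whose taps sum to `gain` (1.0 = unity, i.e. the
// blur does not change the level of DC/low-frequency content; gain > 1.0
// would bake a signal boost into the kernel itself).
std::vector<float> makeGaussKernel(int taps, float sigma, float gain = 1.0f) {
    std::vector<float> k(taps);
    const float center = (taps - 1) * 0.5f;
    float sum = 0.0f;
    for (int i = 0; i < taps; ++i) {
        const float x = (i - center) / sigma;
        k[i] = std::exp(-0.5f * x * x);
        sum += k[i];
    }
    for (float& v : k)
        v *= gain / sum;   // normalize, then apply optional boost
    return k;
}
```

Note that even with unity-sum taps, blurring still attenuates high-frequency content, which is why some level loss remains for bright signals.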

Ok, so after spending a long time fussing with the gain compensation and getting to work kind of OK, I started looking at the zipping when changing texture size and finally implemented a fractional texture size scheme so that I don’t have to blur at two adjacent texture sizes and blend between the results. This freed up enough cycles to be able to lerp texture size across the block as well as update the kernel across the block with the delta approach you suggested. Once those were both implemented a magical thing happened: the gain compensation is no longer necessary because even at maximum blur, the signal strength is diminished only slightly. There is now only barely audible zipping when changing texture size with a simple waveform, but also the effect is now much more subtle, rather than the intense flanging and comb filter kinds of sounds that were happening before. I think the comb effect was actually just the weird discontinuities that happened while sweeping texture and blur size introducing related overtones in the high frequencies.
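A fractional texture size ultimately rests on interpolated reads at non-integer positions in the circular buffer. A minimal sketch of that building block, assumed to be similar in spirit to the patch's scheme (names are made up):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Linearly interpolated read at a fractional (non-negative) index into a
// circular buffer; the standard trick for click-free changes in effective
// buffer length. (Illustrative sketch, not the patch's CircularTexture code.)
float readFractional(const std::vector<float>& buf, float index) {
    const int size = static_cast<int>(buf.size());
    const int i0 = static_cast<int>(index) % size;
    const int i1 = (i0 + 1) % size;          // wraps around the buffer end
    const float frac = index - std::floor(index);
    return buf[i0] + frac * (buf[i1] - buf[i0]);
}
```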

I’m going to spend a little more time with this tomorrow before pushing it to the public patch because it is 3am and maybe I messed something up. But at the moment, this feels a bit like a breakthrough.


I finally have everything working nicely and have updated the public patch! Texture Size and Blur Amount can be modulated quite heavily without too many audible artifacts, thanks to fixing how I offset the sample reads relative to the write head of the circular buffer. Now it produces the kind of pitch warbling one expects from delays and flangers.

I removed my gain compensation code completely because the signal loss from blurring is now small enough that the Daisy Compressor works as one expects. I went ahead and exposed all of the compressor's settings as MIDI parameters and gave them defaults that will make the compressor kick in only when the blur goes above 0 dB.

I also added a “brightness” parameter that can attenuate or boost the volume of the input signal as it is blurred, so you can either push a louder signal into the compressor or boost the signal level after the compressor with the make up gain.

All this and it performs better than the previous version. :smiley:

Two things I’m not entirely sure about:

  • compression is applied only to the blurred signal, not the final dry/wet mix. I did this so that fully dry will always be exactly what’s coming into the module, but I wonder if it might be nicer to compress the mixed output?
  • the compressed blurred signal is what’s put into the feedback path, but I wonder if it makes more sense to only feedback the uncompressed blur?

After all of this I now feel like I need to rerecord the demo video because the sound of the thing is so much better.

I would agree that it makes more sense not to compress the dry signal. Also, maybe there should be a control for the amount of wet-to-compressed mix, i.e. to allow parallel compression on the wet signal rather than fully compressing it.

This would make feedback level dependent on the amount of blur in addition to the feedback amount. I think the current approach makes feedback more controllable, but it should be easy enough to give the alternative a try too. Uncompressed feedback can have some advantages, i.e. you could try sending triggers to make the patch ping; compression could prevent this, as it adds some lag and may prevent self-oscillation.

Ah, yes, I can add this. Although it is kind of funny to me that this patch basically has a fully-functional hidden compressor now.

I just tried pinging the patch as it stands and it seems to more or less work. It’s definitely the case that playing with high feedback and compression yields some interesting results, so I will probably leave the feedback in the compression path.

And since the thing that sent me back to working on the patch to eliminate the noise caused by my poor gain compensation was playing my Rhodes through the patch, I decided to try the Rhodes again and this patch makes for a nice stereo chorus effect, although I think I had gain up too high on my Instrument Interface so there is some distortion in the Rhodes from playing loudly. Might be even nicer if I dialed that back and boosted the signal going into the blur, which would then hit the compressor and hopefully not get so crunchy.

There are still some nasty audible artifacts, I would say that they mostly coincide with decay phase of bigger chords. Not sure if that’s compressor kicking in or something else.

Oooh, yes, I can hear those quite clearly in headphones. They were not noticeable to me listening on my monitor speakers. I didn't really have the compressor engaged in this recording, but I was using envelope followers to modulate some things, so it could be something to do with that. Although there are a few loud spots where I'm playing just single notes that don't seem to cause the crackling. It feels like the crackling is mostly there with denser chords.

I’ve finally fixed the clicking when modulating texture size and added a blend parameter for the compressor so we can blend between uncompressed and compressed blur. It does change the sound pretty substantially from the previous version, but I think it is also the more correct implementation of the concept. There are definitely some sweet spots to be found where some subtle tilting of texture size and dialing in of feedback give a mono sound quite a nice stereo field.

I think tomorrow I will rerecord the segments for the demo video and maybe do another Rhodes improv to show the difference in sound there.


I don’t have time to check it out myself right now, so I’ll just ask here.

Does anything interesting come out if you visualize the output in 2D rather than as 2 mono channels? There are a few scopes with a Lissajous mode in VCV Rack that you could use when recording the video.