Updating the WaveTable Morphing Patch for Magus

Hi all,

I am the author of the Owl Morphing2D patch that we built with @mars two years ago. It is a morphing oscillator for complex, user-made wavetables, so you need an external wavetable editor such as the one I suggested. The Owl patch works as a 2D X/Y grid that interpolates between waves. The hidden part is an additional “third” dimension used for frequency sweeping to avoid aliasing. I’d like to turn it into a Magus patch and I’m asking for some help and recommendations.
First, you can have a look at the previous work:
Previous thread about wavetable oscillator
Owl modular patch

I just got my Magus working and updated recently, so I am very new to it. I managed to build a quick patch using the screen and two different morphing oscillators; the right input is used for FM.
Here’s the link : Github link to patchsource
I can upload it to the web patch library if someone wants it stored that way, though I don’t think the online player supports it.
I don’t have many modules at the moment. Could someone tell me whether the V/Oct left input works properly?

Considering its numerous patch CV points, one appeal would be to use the Magus standalone, even though that’s not my own use case. I have several ideas that I can summarize in three patches.

PATCH 1: Polyphonic 5 Morph Osc
As it stands, the CPU lets me run up to 5 oscillators together. So it could be a MIDI-driven patch that you play remotely with a MIDI keyboard, or a 5-note chord transposed by the V/Oct input. I would use a single wavetable for every oscillator. I’d include envelopes as well as LFOs to move around the wavetable’s X and Y axes. An FM input would be nice too.
I saw that MIDI is integrated in a few patches, though I am not familiar with it. I guess there must be an algorithm to sort out polyphony from MIDI notes…
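As a starting point for the polyphony question, here is a minimal voice-allocation sketch in plain C++. All names are hypothetical and it is not tied to the OWL MIDI API; it just shows the usual idea of reusing a free voice and otherwise stealing the oldest one.

```cpp
#include <array>
#include <cstdint>

// Minimal 5-voice allocator sketch (hypothetical names, not the OWL API).
// noteOn() reuses a free voice if possible, otherwise steals the oldest one.
struct VoiceAllocator {
    static constexpr int kVoices = 5;
    std::array<int8_t, kVoices> note{};   // -1 means the voice is free
    std::array<uint32_t, kVoices> age{};  // allocation order, used for stealing
    uint32_t counter = 0;

    VoiceAllocator() { note.fill(-1); }

    int noteOn(int8_t midiNote) {
        int victim = 0;
        for (int v = 0; v < kVoices; ++v) {
            if (note[v] < 0) { victim = v; break; }   // free voice found
            if (age[v] < age[victim]) victim = v;     // remember the oldest
        }
        note[victim] = midiNote;
        age[victim] = ++counter;
        return victim;                                // voice index to (re)trigger
    }

    void noteOff(int8_t midiNote) {
        for (int v = 0; v < kVoices; ++v)
            if (note[v] == midiNote) note[v] = -1;
    }
};
```

Each returned voice index would then drive one of the 5 morphing oscillators and its envelope.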

PATCH 2: Dual Morph Osc
Back to monophony, I’d make a 2-oscillator patch, again with integrated LFOs to travel around the XY plane. This one would use 2 different wavetables, with a V/Oct input and, why not, MIDI too.

PATCH 3: Dual Morph Osc with Sequencer
This would be the standalone version of PATCH 2, where the LFO parameters get trimmed down to leave room for a sequencer and envelopes. I’d like the sequencer and LFOs to be driven by an external tempo clock input.

Now some general improvements :

I managed to display the left signal on the screen. A nice improvement would be to split the screen between left and right. A bigger improvement would be to display the interpolated wave, which is, IMHO, a lot of work for a small result, because the X and Y phases are not locked and it would need structural changes.

Here I’m not sure how to proceed. The default patch points are DC-coupled 0–10V if I’m right. So the signal needs amplification to be used as an FM input around ±5V. Then this float parameter has to be offset by -0.5 to get back to audio values, and finally it goes into the setFrequency method argument (freq + FM). Nevertheless, I never got this way to work :cry:
So I thought another solution would be to use one audio input as the FM input and call the setFrequency method inside the buffer loop with argument (freq + right[n]). This gives results, but I’m not sure it’s the right result. Could someone try it and share their view?
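To make the buffer-loop idea concrete, here is a minimal sketch. SimpleOsc is a stand-in phase accumulator and renderWithFM a hypothetical helper, made up for the example; they are not the OWL oscillator classes.

```cpp
#include <cmath>
#include <vector>

// Stand-in oscillator: a phase accumulator with a setFrequency() method,
// mimicking the shape of the call described in the post.
struct SimpleOsc {
    float sr, phase = 0.0f, inc = 0.0f;
    explicit SimpleOsc(float samplerate) : sr(samplerate) {}
    void setFrequency(float hz) { inc = hz / sr; }
    float generate() {
        const float kTwoPi = 6.28318530718f;
        float out = std::sin(kTwoPi * phase);
        phase += inc;
        if (phase >= 1.0f) phase -= 1.0f;   // wrap the phase
        return out;
    }
};

// freqHz is the base pitch; fm holds the audio-rate modulator scaled to Hz.
// Calling setFrequency() per sample gives linear FM, as in (freq + right[n]).
std::vector<float> renderWithFM(SimpleOsc& osc, float freqHz,
                                const std::vector<float>& fm) {
    std::vector<float> out(fm.size());
    for (size_t n = 0; n < fm.size(); ++n) {
        osc.setFrequency(freqHz + fm[n]);
        out[n] = osc.generate();
    }
    return out;
}
```

With a zeroed modulator this reduces to a plain sine at the base frequency, which is a quick sanity check that the loop itself is sound.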

I’d like to add a trigger input to synchronize the LFOs and sequencer to an external clock. I don’t remember anything like this in the C++ patch library, so I wonder: is that even possible? And what about generating a clock, so that the Magus can be either master or slave?

I noticed some waves are very low volume, so some sort of normalization might be needed. Maybe another type of interpolation could give better results; it is linear for now. Alternatively, a compressor would do the job.
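If per-wave normalization turns out to be the way to go, a simple peak-normalization sketch could look like this. The helper is hypothetical, and note the assumption that normalizing each wave independently is musically acceptable (it changes relative levels between waves in the table):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Peak-normalizes one wave in place so its loudest sample reaches `target`.
void normalizeWave(std::vector<float>& wave, float target = 1.0f) {
    float peak = 0.0f;
    for (float s : wave)
        peak = std::max(peak, std::fabs(s));
    if (peak > 0.0f) {
        float gain = target / peak;   // scale everything by the same factor
        for (float& s : wave)
            s *= gain;
    }
}
```

This could run once at load time rather than per buffer, so it costs nothing at audio rate.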

As explained in the previous threads, 256-sample waves aren’t made for bass (unless you’re into that dull digital bass sound :grimacing:). Playing back a 47Hz bass at a 48kHz sample rate needs about 48000 / 47 ≈ 1021 samples per wave, i.e. 1024 in practice. So it would need a lot more storage. I need to do some calculations to know how far I can expand the wavetable and how big the header files can be.
There are a few options here. The simplest is to change all waves to 1024 samples, but this would mean huge storage needs and a smaller wavetable grid. Otherwise, for the “dual morph synth”, we could define the left channel as a 1024-sample “bass” oscillator covering 4–5 octaves before it aliases, and the right as a 256-sample “lead” (e.g. with a two-octave offset). Finally, I think it is possible to play back waves of different sizes; that would be the most advanced algorithm, but very different from the current one…

As I mentioned in the previous thread about the wavetable oscillator, WaveEdit makes 256-sample waves by default. I managed to change the sample length to 1024 in the source code here and then build the editor myself on Ubuntu, so that’s a good start! Let me know if you need some help with that. It’s actually a very useful free tool! I’m not sure you can find an equivalent out there… You can do many kinds of editing with it.
I thought about a nice feature for the lazy ones: a wavetable interpolator. Taking the existing Owl patch as an example, instead of making all 42 waves yourself in WaveEdit, you could just make 6 waves for the first X axis and 6 waves for the last X axis, then feed them to an algorithm that creates the “in-between” waves, so you get a “progression” from the first to the last wave along the Y axis. We could even extend that to the “corner” waves and let the algorithm build all the “inside waves” with some degree of randomness, effects…
This could also be a way to reduce the size of the .h header file.

OK… already a long post! Thanks for reading.
Please share your ideas if you have other features in mind.
I am not sure how or when I’ll come up with new patches, but I’ll try to get into it when I have time. Feel free to take part in it! I can help you understand the code. I’m not yet too familiar with GitHub group work, so let me know if I have to create a specific branch or unlock anything.
I could also create a wavetable library for Rebel Technology devices so that we can share them.

In the last commit I tried making LFOs like in KickBoxPatch, but I always get errors with the BX parameters and the AX setParameters methods. Does anyone have an idea what’s wrong here? You can try uncommenting those lines.
Basically, if this works, I’ve got LFOs ready!

Hi @AlWared,

It would take some time to digest your post, but based on a quick look:

  1. I made a fork of WaveEdit a while ago that is quite different from the upstream version; check it out at GitHub - antisvin/WaveEdit: Synthesis Technology WaveEdit for the E370 and E352 Eurorack synthesiz . It has the same number of samples as upstream, but that’s easy to change. Please give it a try and let me know if you find it useful. It has a very different workflow, based on base wavetable generation and effect morphing.

  2. Regarding the screen, you shouldn’t be using the generated audio, because the display updates just once every 20ms. Instead you should render the wavetable separately. So to render a WT preview with a width of N samples, you would render your table with a phase step of 1 / N.

  3. Regarding FM and V/Oct control, you should use audio inputs for both if you want to support FM at audio rate. If it’s only meant for something like vibrato from an LFO, the CV inputs would do instead. But you’re right that they’re currently set to 0-10V. Actually, I think I could make that editable in the UI, unless Martin has a better idea about exposing this to users.

  4. Regarding WT volume: are you sure you don’t have the device volume set too low? It’s editable in the menu.

  5. Something I was working on recently is reading resources from flash. Basically, the latest firmware release allows storing different wavetables on the device and loading them by name. Some info here - Resources usage - #15 by mars .
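The preview rendering from point 2 could be sketched like this, stepping the phase by 1/N and interpolating the table directly rather than reusing the audio output. This uses a plain std::vector stand-in, not the OWL screen API:

```cpp
#include <cmath>
#include <vector>

// Render an N-pixel preview of one wave by stepping the phase by 1/N and
// linearly interpolating the table, wrapping around at the table end.
std::vector<float> renderPreview(const std::vector<float>& wave, int n) {
    std::vector<float> pixels(n);
    const size_t len = wave.size();
    for (int x = 0; x < n; ++x) {
        float phase = float(x) / float(n);   // 0 .. just under 1
        float pos = phase * float(len);
        size_t i0 = size_t(pos) % len;
        size_t i1 = (i0 + 1) % len;          // wrap to sample 0 at the end
        float frac = pos - std::floor(pos);
        pixels[x] = (1.0f - frac) * wave[i0] + frac * wave[i1];
    }
    return pixels;
}
```

Running this once per display refresh (every 20ms) is cheap compared with the audio path, and N can simply be the screen width.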

Hi @AlWared, great to see you’re picking this up again, very exciting!

As @antisvin says we’re just working on supporting flash-stored resources which would be perfect for the wavetables.

How many tables do you need in total for full 2D morphing?

I would expect an 8x8 grid to be used for WaveEdit compatibility. With 16-bit data and 1k samples per table, that would require 128kB of resource storage. We may also add 1-4 repeated elements per table to avoid conditional checks or divisions when determining phase wrapping.
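The “repeated elements” trick could look roughly like this sketch: append copies of the first few samples to the end of each table so that interpolation at the wrap point needs no modulo or branch. addGuardSamples is a hypothetical helper, just to illustrate the layout:

```cpp
#include <vector>

// Append `guard` copies of the leading samples to the end of a wave, so a
// linear (or higher-order) interpolator can read index i+1 (or i+3) without
// checking for wrap-around.
std::vector<float> addGuardSamples(const std::vector<float>& wave, int guard) {
    std::vector<float> out(wave);
    for (int i = 0; i < guard; ++i)
        out.push_back(wave[i % wave.size()]);
    return out;
}
```

The cost is a few extra samples per table; the inner audio loop then indexes straight into the buffer with no conditional.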

Btw, I’m not convinced that 256 samples are insufficient for bass notes. The upper harmonics would have low volume and would drown in artifacts from wavetable morphing, FM, or whatever extra effects are applied after the VCO. And it feels very wasteful to use multiple versions of the tables on an embedded platform. In fact, I’m not aware of any Eurorack module that takes this approach.

I think one thing worth checking out is how it was done in MI Plaits, based on this paper .


Answering:

  1. I’ll try that ASAP!

  2. The sound is always an interpolation between 4 waves, so it’s this interpolation, rather than a literal wave from the table, that would be relevant to show on the screen. I don’t know if that’s what you had in mind; I still have to think about how to implement it.
    I had another idea: simply show the grid and the current X/Y position. More useful when you morph quickly.

  3. Yeah, I think the FM input works now.

  4. No, it comes from the wavetable itself, but I see you can normalize in WaveEdit. Maybe I could include normalization in the code too.

  5. I had a quick read but I’ll go back to it. I don’t read much of the forum… but Martin told me other users were interested in a wavetable patch!

Yes, I think I was wrong about 256 vs 1k waves… Using 256-sample waves should be totally fine. The “dull” sound I had was caused by an excess of high-end harmonics in the wave definition. 1024 samples really aren’t suited to an embedded application. However, using two oscillators with two tables remains an interesting feature, IMO. I mean, if we can store them, it could be the user’s choice to use the same wavetable or one wavetable per oscillator.

8x8 would indeed go well with WaveEdit! Then don’t forget the “anti-aliasing waves”; we need about 7 of them if I remember correctly.
So after the “FFT anti-aliasing” processing we need 16 × 256 × 7 × 8 × 8 = 1 835 008 bits per wavetable.
Am I missing something? It’s far from your calculation…

My number was based on 1024 samples and no anti-aliasing tables. So if you divide it by 4 (1024 down to 256 samples), multiply by 7 (the band-limited copies), and then by 8 (bytes to bits), you’ll get the same value in bits.
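Spelled out, with both figures expressed directly in bits for an 8x8 grid of 16-bit tables:

```cpp
// Cross-check of the two storage estimates from the thread.
// Martin's figure: 1024 samples per table, no band-limited copies.
constexpr long kMartinBits  = 8L * 8 * 1024 * 16;     // = 1 048 576 bits = 128 kB
// AlWared's figure: 256 samples per table, 7 band-limited copies each.
constexpr long kAlWaredBits = 8L * 8 * 256 * 16 * 7;  // = 1 835 008 bits

// Going from 1024 to 256 samples divides by 4; seven BL copies multiply by 7.
static_assert(kMartinBits / 4 * 7 == kAlWaredBits, "the two estimates agree");
```

So the two numbers are consistent once the sample count and the band-limited copies are accounted for.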

Regarding the extra tables, I wasn’t counting them because, like I’ve said, they aren’t used in any wavetable VCO that I know of. I could add them as an option in the WaveEdit fork I mentioned if you would find that useful, no problem with this! But I won’t be using it myself and would likely take the approach used in Plaits, because I want to be able to make a 3D morphing wavetable VCO too. This would be based on the same 2D tables, but the patch would load 8 of them and use that as the Z axis for morphing. This would require about half of the available flash, so it’s possible - but not if we use 7× more space.

I read the higher-order integrated wavetable and Geiger papers. If I got it right, that would trade memory for CPU, with a small increase in aliasing. I still have to form an opinion about the algorithm behind it, but it sounds like an interesting approach, especially for 3D as you mentioned. Maybe not recommended for a polyphonic patch though. I may stick to the BL approach because I’m too lazy to start again from scratch, but I’d like to compare both WT oscillators.

I think the idea of my patch was to generate the band-limited (BL) waves itself instead of storing them all. I’m not sure it is really useful, but we made it this way just because we could.

I’m actually trying to use “various size BL waves”. If I’m successful, that will significantly reduce the space needed.
I’ll give it a try by the end of the week and push it if it works. How big is the flash?

I tried it today! Well done! Good to include modulation. Unfortunately, a single change to the carrier/modulator sets the wavetable back to its default (sine wave). For example, you cannot import an external user wavetable and then apply the modulations to it. Did you notice this?

Thanks, glad to know it worked for someone else.

I’ve made considerable changes to WaveEdit, as you’ve noticed. The idea is that you have a single base wavetable (as in 256 samples) that acts as the base for every wavetable in the 64-table grid, which gets generated by applying various effects. Because there’s just one base wavetable, importing a whole grid only lets you apply effects (more or less the same way you would in the original WaveEdit). But if you change a base table generation parameter, you’ll force re-rendering of the whole grid, so imported samples will be overwritten.

I think I could add some support for importing the carrier or modulator from a WAV file. The easiest way would be to allow assigning the carrier/modulator table from any table in the grid. It may be possible to load the table into the carrier/modulator from a file directly, but then we’d have to know which specific table number should be used.

I would say that a 3D table is excessive if you want polyphony anyway. Usually the Z axis is used for different tables, and morphing along it ends up with partials moving too unpredictably. Regarding aliasing, I haven’t noticed anything perceivable with Plaits.

512kB of flash is available (for both patches and all data). You can see how much of it is free on your Magus; there’s a page in the UI with system info. There’s an extra 8MB chip that we don’t support in firmware yet; I think it will be working this year. That chip has one limitation: we must copy data from it into memory, while with internal flash we can read from it directly.

Also, there’s a big difference in performance depending on whether data fits in internal RAM (64Kb + 60Kb) or has to be placed in the slower SDRAM (8MB).

I’ve added some features based on the discussions here. There are 2 new “Set as carrier/modulator” options in the menu when you click on the wavetables list. Also, there’s a checkbox to freeze the waveform, which disables generating new waveshapes in those base oscillators. But you can still use the phase-based processors to modify it.

I’ve started work on integration with OWL based on the RtMidi library in a separate branch. The plan is to make a librarian-type application for reading/writing wavetables to OWL devices, maybe some patch management too. That branch is not particularly interesting yet.


@antisvin Nice one, I tried it a bit but need to spend more time exploring!

I’ve edited the patch lately. First, I got rid of the OscSelector files, which gave slightly lower CPU use; MorphOsc now directly interpolates between the four surrounding waveforms. I want to follow the same idea to draw the unaliased interpolated waveform on the screen, but I haven’t thought about it much yet.
I updated the wavetable to 64 units (8x8). As before, I generate a .h header file from the WaveEdit .wav. But now I cannot use two different wavetable banks:

Building patch MorphMagus
/home/wawa/Documents/RebelTechnology/OwlProgram/Tools/gcc-arm-none-eabi-9-2020-q2-update/bin/../lib/gcc/arm-none-eabi/9.3.1/../../../../arm-none-eabi/bin/ld: Build/patch.elf section `.data' will not fit in region `PATCHRAM'
/home/wawa/Documents/RebelTechnology/OwlProgram/Tools/gcc-arm-none-eabi-9-2020-q2-update/bin/../lib/gcc/arm-none-eabi/9.3.1/../../../../arm-none-eabi/bin/ld: region `PATCHRAM' overflowed by 1720 bytes
collect2: error: ld returned 1 exit status
make[1]: *** [compile.mk:110: Build/patch.elf] Error 1
make: *** [Makefile:95: patch] Error 2

So I assume there is some memory issue. I tried copying from other headers or deleting a FloatArray, but I always end up with something like this. I think this is also why we limited the previous Owl patch to a 42-unit wavetable.
I need to take a closer look at how we want to store wavetables on the device, so I haven’t tried much to fix this and am sticking to a single 64-unit bank.

I did not get the various-size waves working :grimacing: I made a branch for it. Basically, we store the waves in this order: 7 BL waves per 8 Y-axis positions per 8 X-axis positions. We use the full 256 samples for the first BL wave and then, for every new BL wave, we process the FFT to remove the upper half of the frequency content (setting the magnitudes to 0). So I thought we only need half the information to recreate the wave, and could store a half-sized sample in our wavetable to free memory.
But I get discontinuous sound when I switch BL waves, which I thought could be a normalization issue with the IFFT. I also get no sound at all for the upper BL waves, so I might be missing something here. Any thoughts?

In a00412c I’ve set up two LFOs thanks to @mars’ KickBox patch. I don’t know why I had issues with this two months ago, but now it works. I will go on with VCAs and envelope generators, and that should already make a good patch; maybe I’ll make it visible in the library at that point.
Then I’d like to use the top B parameters as triggers/gates for envelopes or LFO rates.
And eventually I can make another patch with a sequencer so that there is a “standalone” version of the patch.

About polyphony: for now I can run up to 5 oscillators and 2 LFOs before I hit critical CPU usage. I doubt 6 will ever be possible. But 5 voices is enough for one-handed keyboard playing, or even two-handed for a player as bad as I am :wink:
But I haven’t tried MIDI yet; I need to make a jack MIDI connector to try the existing patches from the library.

OK, so it sounds like you should definitely add support for reading wavetables from flash. This would resolve the issues with patch size and also let you share wavetables between patches. The general direction is something like this:

  1. Use this script to convert a WAV into an OWL resource - OwlProgram/makeresource.py at develop · pingdynasty/OwlProgram · GitHub . Something like ./makeresource.py -w -i WT.wav -o WT.bin ResourceName should work; ResourceName is the value you’ll use for loading.

  2. Use the make resource RESOURCE=WT.bin SLOT=... command to load it onto the device. This is similar to sending patches: you give it a RESOURCE variable with the path to the resource file and specify a SLOT. I think you should use something in the 43 - 54 range.

An alternative is the approach described by Martin here. Note that he’s writing about converting data to floats, but that would use double the flash (with no improvement to anything). You probably want to keep it as int16 on flash and convert to float when loading the data into memory in your patch.
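The int16-to-float conversion at load time could be as simple as this sketch (the function name and scaling convention are assumptions, not the OwlProgram API):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Convert a wavetable stored on flash as int16 into floats at load time.
// Keeping the resource as int16 halves flash usage versus storing floats,
// and the conversion runs only once when the patch loads.
std::vector<float> int16ToFloat(const int16_t* data, size_t count) {
    std::vector<float> out(count);
    for (size_t i = 0; i < count; ++i)
        out[i] = float(data[i]) / 32768.0f;   // map [-32768, 32767] to ~[-1, 1)
    return out;
}
```

In a patch this would be called once on the raw resource buffer, with the resulting float table kept in RAM for the audio loop.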

An IFFT is not supposed to normalize anything; it outputs wave data with the same magnitudes/phases as its inputs. And if you don’t get audio on the upper tables, that’s a strong indicator that something is wrong in your code.

It looks like you’re just copying part of the wavetable after the IFFT; that obviously can’t work after you’ve converted to the time domain. You must downsample it instead, i.e. by copying every sample whose position is divisible by (wave length / fftOffs). So after you’ve discarded half the magnitudes, you’ll use the samples at positions 0, 2, 4, … 254.

Right, I’ll try that. I’d like to make this work before I decide how to manage memory. In particular, I’d like to run a few tests to check how the downsampling resolution affects the sound.
My Fourier theory is a bit rusty, so I have a question I couldn’t find an answer to: why downsample on the even samples? Would I get different information/sound with the odd ones?

There should be no change to the sound, assuming you zero out the magnitudes you’ll remove. You’ll be discarding frequencies that don’t exist in the decimated spectrum, so it won’t add aliasing artifacts.

In this case, you would get the same wave with its phase shifted by 1 / 256 if you use the odd samples.

@AlWared I’ve made a fork of your repo here.

I got a patch working with dynamically loaded resources and two wavetables, using a pre-release firmware. It sounds great with two tables in stereo!

I was planning to make some more changes to allow several oscillators to use the same wavetable, probably just by passing references to the waveTable objects. Have you already thought about how to do this?

Does it compile with the OwlProgram develop branch? I’m still missing TapTempo.hpp.

Looks like you did a lot! I didn’t really know how to structure it with synth voices, so I’ll give it a closer look once it runs. I guess TapTempo.hpp is the way to sync the LFOs. On my side, I was working on a similar class to get a smoothed BPM from triggers, and I’m quite happy with the result.
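A smoothed trigger-to-BPM estimator along those lines might look like this sketch. The class name and the smoothing coefficient are made up for illustration; this is not the actual patch code:

```cpp
// Trigger-to-BPM estimator with exponential smoothing of the inter-trigger
// interval, measured in samples.
struct TempoFollower {
    float samplerate;
    float smoothing;            // 0 = follow instantly, 0.9 = heavy smoothing
    float smoothedPeriod = 0;   // running period estimate, in samples
    long samplesSinceTrig = 0;
    bool armed = false;         // the very first trigger only starts timing

    explicit TempoFollower(float sr, float smooth = 0.8f)
        : samplerate(sr), smoothing(smooth) {}

    // Call once per sample; pass true on the sample where a trigger arrives.
    void process(bool trigger) {
        ++samplesSinceTrig;
        if (trigger) {
            if (armed) {
                float interval = float(samplesSinceTrig);
                smoothedPeriod = (smoothedPeriod == 0)
                    ? interval
                    : smoothing * smoothedPeriod + (1.0f - smoothing) * interval;
            }
            armed = true;
            samplesSinceTrig = 0;
        }
    }

    float bpm() const {
        return smoothedPeriod > 0 ? 60.0f * samplerate / smoothedPeriod : 0.0f;
    }
};
```

The smoothing coefficient trades jitter rejection against how quickly the LFOs follow a tempo change.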

The “various size waves” idea is quite a failure, though… While it looked good on paper, I didn’t manage to get rid of aliasing this way. I wanted to get that done before we changed the wavetable loading, which is why I was a bit quiet.
So never mind, let’s go with the single-size BL waves.

Sweet !
I think once you start sending ADSRs and LFOs to twist the morph inputs, it gets very playful! The pain is perhaps finding an appropriate yet versatile set of wavetables to store on the device.

Many oscillators per wavetable… Well, I thought I first had to build the synth voices, so I still have to think about it.

The library classes for AbstractSynth et al. are still in flux; I hope to have them pinned down quite soon.

I wouldn’t worry about this. The thinking seems to be that the relatively bigger wavetables for higher frequencies actually help mitigate the quantisation error you would otherwise get. So I don’t think this is a bad thing.