V/Oct calibration on Magus

This is not a question about V/Oct tracking not working out of the box, as someone might expect, but an announcement that I’ve been working on an automatic procedure for calibrating input/output. The V/Oct scaling code relies on calibration settings, and the ones that are shipped gave results that were way off (i.e. the scaling multiplier I need is about 1.5x bigger than the default setting). Currently this code resides at GitHub - antisvin/OpenWare at magus-calibration, and it actually works, but I’ll be making a few minor updates before opening a PR.

The calibration procedure I’ve made should be run like this:

  1. Enter system menu, scroll to V/Oct page, confirm you want to calibrate with encoder click
  2. Choose Input, confirm
  3. Connect a 1V source to the left input, confirm
  4. Connect a 3V source to the left input, confirm
  5. Choose Save
  6. Choose Output, confirm
  7. Connect left input to left output, confirm (twice, with a short pause)
  8. Choose Save

That’s it, now you can use the code from OwlProgram/VoltsPerOctave.h at master · pingdynasty/OwlProgram · GitHub and it will use the stored calibration settings (don’t forget to construct it as VoltsPerOctave(false) for output CV, since the input flag defaults to true). Also, I’m using max volume to get the full output CV range (it could be that not doing so results in incorrect measurements, I haven’t tested this yet).

I’m also finishing some code that will expose V/Oct tracking to Faust patches.


Wow that’s brilliant! Nice code too. Do you want to make a PR?

Out of curiosity, what are the offsets and scalars that you get after calibration?

Input scalar=-6.4383 offset=-0.0021
Output scalar=5.0293 offset=-0.0034

This gives the expected V/Oct tracking. I’ve set the output gain to max, while the input gain is set to 92 and there’s no way to change it in the UI. It looks like not using max output gain was giving me incorrect results. I was thinking about exposing the input gain in the volume menu as well, to use the full range for CV - it shouldn’t take much time to get this working.

I don’t quite understand why their signs are opposite: writing a value to the DAC and reading it back from the ADC gives me the opposite sample sign when normalized as float. But after calibration the voltage comes out as expected, not inverted. I’ve seen mentions of an inverted ADC in the source code, but it was referring to other devices, so I’m not sure if this is expected for the Magus. I’m using the following code for the conversion (it should be the same thing that happens in the SampleBuffer class):

Output = (int32_t)(current_sample * 2147483648.0f) >> 8;

Input = (float)(int32_t)((pv->audio_input[0])<<8) / 2147483648.0f;

I will be opening a PR soon, but wanted to make another minor change to the current code first - use the average value over multiple samples during measurements, which should give slightly more precise results.

And I’ll rebase onto the latest commit and test with it - when I tried it a few weeks ago I had some issues compiling. It looked like a CubeMX upgrade broke something related to USB audio device initialization, so I used a commit from the stable release with the older CubeMX, and that worked fine. Did you run the latest firmware on the Magus after your December updates?

I’ve added a few more features to make Magus more usable:

  1. Renamed the “Volume” menu to “Gain” and added editing for both input and output gain (selectable by encoder click). Also changed this menu’s UI to include slider widgets for the gain levels.

  2. Unless I’m missing something, changes to volume were not stored to flash. So I’ve added a flag that is set on gain change and stores the settings on exit.

  3. Patch slot numbers are now displayed in the “Patch” menu. No more accidental patch overwriting! I also slightly changed the menu rendering - it used to display slot 0, but we don’t allow writing to that slot, so it was always empty. BTW, I’m not sure if slot 0 is used for anything (settings, maybe?) or if this is a bug.

  4. This was done earlier and is a trivial thing, but the status menu now shows the firmware version.

  5. Updated calibration to measure the average value over the whole buffer instead of a single sample. This reduces the error considerably - the scalar (a float value) is now usually within 0.001 between subsequent measurements, where previously it could change by 0.02 or so. This approach must use no more than 256 samples to fit the sum of 24-bit samples in a 32-bit number, so I’ve also made it skip samples on larger buffer sizes. That part is untested so far (and doesn’t matter until the buffer size is editable).

I think this should be enough updates for now. I’ll write some test patches and open a PR some time next week. However, I gave the latest version from the master branch another try with CubeMX 5.3 (also tried 5.4), and it hangs while booting. That happened after fixing the USB audio device init issue the same way the Cube update script does it for Wizard/Alchemist.


Brilliant, I’ll try out your version asap and will try to answer some of the questions.

As for input gain, the CS4271 codec that we use in the Magus, Wizard and Alchemist doesn’t have a PGA (Programmable Gain Amplifier) on the ADC, only on the DAC. That’s why we’ve only had an output gain setting.

The WM8731 that we used in the OWL Pedal and OWL Modular has a PGA both for the ADC and DAC.

See https://www.mouser.com/datasheet/2/76/CS4271_F1-28973.pdf for details on the codec.

Hm, I hadn’t thought about checking the datasheets in that regard. At least now I won’t have to figure out why it didn’t interfere with V/Oct tracking like I expected. In that case I’ll restore the previous functionality of the volume menu.

Btw, one thing that was very annoying in the current firmware is an issue with the parameter display going in the wrong direction when the encoder direction changes. It took a while to track down, but there was a bug: the user value calculation was performed before the encoder value was updated. You’ll understand when you have a look at the last commit.


Cube updates: there was a version of CubeMX which didn’t generate the SAI sync configuration. And there’s a longstanding issue with NAK messages on the USB Host interface. Oh, and the Cube code generation likes to set #define USBH_USE_OS 1U, which doesn’t work for us - it slipped into the master branch again somehow!

I’ve now updated and verified the master branch so the Magus firmware works again, using latest CubeMX.
After code gen you will still have to reset some files in Src to the master branch version.
I’ve committed the script cube-update.sh for this purpose.


oh yes, thank you :champagne:
That was going to be next on my list to try to fix. Not an easy bug to find, well done!
I’ve updated master branch.


Can confirm that v20.6 runs now. So I’ll work on rebasing, remove the input gain settings, and test again before making a PR.

Also, I had a question about encoders - is there potentially any way to receive every click instead of every other click? I understand that it works according to what arrives over SPI and this isn’t a firmware issue, but I’m not sure if there’s some kind of gain or threshold that can be adjusted to make them more sensitive?

Hmm, audio is broken after this update. I’m getting bursts of noise or a sine wave in the output. Sounds like my Magus can play Merzbow now. Input readings look wrong now too.

Also, your last commit won’t build because getFirmwareVersion is not visible in MagusParameterController.hpp

Aah, that Merzbow vibe! I messed up, apologies - fixed now.

I’ve been developing support for a couple of new works in progress that both require multi-channel audio support, and in the process I broke the code by setting the wrong default audio format.

Also fixed the getFirmwareVersion() not defined issue, so it should build now.

Ok, that issue is fixed. But there’s something else that I noticed on that version, and these regressions are still present (it was happening between the bursts of noise):

  1. A constant high-pitched ringing tone in the output (sounds like a distorted sine)
  2. Encoders produce noise in the menu in some cases - specifically when using the top left encoder in the volume menu

This is in the firmware from the main branch. In my fork the current code also breaks V/Oct calibration - it looks like the measured samples pick up huge amounts of extra noise.

Just in case: this also happens when powering from a PSU, and disappears if I restore FW 20.5.

While waiting for the audio to be fixed, I couldn’t resist adding something else to my branch:

  1. A sensitivity setting for encoders. Previously the encoder step was hardcoded to 1 << 6; now it can be changed to 1, 1 << 3, 1 << 9 or 1 << 12
  2. Added code for drawing circles (empty or filled) to the ScreenBuffer class. It’s used to visualize the sensitivity levels. It uses only integer maths and should be useful in other cases, so I think I’ll add it to the ScreenBuffer code in the OwlProgram repo too.

I’ve added this setting to the Play menu and made that menu the default instead of the stats page. I think this will be the most common reason to use the menu, and it makes more sense as the default page (besides being first in order).

Opening a PR, since I’m not aware of any remaining issues.


I think this is due to changing the LED PWM rate. I’ve reverted to previous settings, let me know if that takes care of the noise for you.

I’m pretty sure that we’ve had several issues here. Your last update fixed the noise from the encoder in the menu. There was also a constant tone at ~2.9 kHz plus harmonics - that’s gone too. But there’s still some kind of periodic noise in the codec input. There’s a quick way to reproduce it in Faust with a one-line patch that just copies input to output:

process = _, _;

You can hear it with the volume set to max. If you look at it with a scope, it looks like a short pulse emitted every 10 milliseconds.

Turns out this issue also happens when downgrading to the old firmware (the 20.5 release), so it’s not a regression. However, the new firmware gives me an input scalar value twice as big as I was getting before.

Now the fun part. After I reduced the volume from 127 to 100, the noise is barely audible and I’m getting the same input scalar as before (-6.44). So one of two things must be happening:

  1. V/Oct calibration reads some data from the output as input, and the amplified noise skewed the values.
  2. It’s not a bug in my code, but the output gain somehow changes the input noise level.

It looks like I was running into a variable overflow when computing the average sample value - I hadn’t realized that the sum is stored in a signed variable, so only 31 bits are usable. I can’t understand why lowering the output level prevented this (it should make no difference for the input!), but at least it’s fixed now and I get the expected results.

I wonder if it would be possible to get rid of input noise on full volume?

Uploaded a build to GitHub here: https://github.com/antisvin/OpenWare/releases/tag/releases%2Fv20.6.beta1
