Magus patch projects log

You can’t work on per-sample basis with parameters

That is correct, but you can work with small buffer sizes to get audio rate parameters. The default buffer size is 256 samples, which at 48kHz sample rate means that block rate is 187.5Hz.
With the older codebase, on the OWL Pedal and OWL Modular, you can easily change the block size up to 1024, or down to 2 samples, which gives you a lot of range and makes audio rate parameters possible (up to 24kHz).
In the new OpenWare firmware we have not yet implemented a dynamically configurable block size. It’s not been a priority, since we’ve not so far seen a lot of demand for this feature. But it will still get done, eventually :slight_smile:

What exactly are the voltage ranges of the Magus patch points?

The audio ins/outs (top two jacks left and right) are +/-5V, while the bidirectional CV patch points (all the other jacks) default to 0-10V, i.e. unipolar. The hardware supports different configurations: they can also be +/-5V or -10V to 0V, input or output.

Can a Patch tell whether a particular patch point has a cable plugged into it?

The patch only deals with the abstraction of a parameter, and doesn’t know if the value has been set by CV, MIDI or your hand. Or if a cable has been plugged in.

Right now you can’t upload patches that draw to the screen (because device-specific builds are not implemented)?

You can’t compile them online yet, so we won’t be able to load a sysex directly from the patch page. But you can still share the source.

A (low priority) web feature that’s been on my todo list for a long time now is the ability to upload a pre-compiled binary, which would be useful in situations like this.

Also a quick note about drawing to the Magus screen. You can actually print a message, and up to three numbers, without using any special compiler flags or anything. In C++ simply use debugMessage(), i.e. one of:

void debugMessage(const char* msg, int);
void debugMessage(const char* msg, int, int);
void debugMessage(const char* msg, int, int, int);
void debugMessage(const char* msg, float);
void debugMessage(const char* msg, float, float);
void debugMessage(const char* msg, float, float, float);

In Pure Data you can send to a print object and it will show on the screen.
Only the most recent message will be visible, so you can use it to display variable values in real time, but you won’t get several messages showing one after another, because they’ll be overwritten too quickly.

That sounds unfortunate and greatly limits the utility of the device, since it means the patch points would need to be limited to values that update slowly or infrequently… It seems like you couldn’t implement things like FM operators or self-patching with a limitation like this.

And that’s why they’re called CV patch points, not audio patch points. Also, if you tried to pass audio for FM as you describe, it would make a roundtrip through the output and input buffers. This would increase latency, and I think it would also introduce jitter in your modulator’s phase that would make your waveshapes inconsistent.

it seems like if i plug in and then pull back out it snaps back to the last value i set by knob

I just received mine yesterday and didn’t get to play with it a lot. But I suppose the encoder sets an offset for incoming CV that doesn’t change, so pulling the plug out would give you the same value as sending 0V.

@mars, @antisvin, thank you, this is all very useful!

It makes sense the Patch API would be more limited than the Magus itself since Patches are meant to work on multiple devices… I can work within those limitations for now until the firmware gets upgrades or I figure out how to build firmware myself :slight_smile:

Tested more with my saw4 patch today. I realized my MIDI support was broken, so I fixed it; it sets its root note on MIDI input correctly now. I had some fun plugging the Magus directly into my Minibrute 2 finally: the Magus took MIDI in from the Brute, put its output into the Minibrute 2 EXT, and I was able to detune my saws and hear my synth as a voice on the Minibrute with the Brute’s cutoff ADSR and everything. Wheee!

I did discover something weird. While I was testing I downloaded a couple of patches to the Magus and tried MIDI-controlling them from the Minibrute keyboard (Magus as host, Brute as device). What I found on both patches is that if I played lots of notes, occasionally (pretty frequently, actually) there would be “note off” messages that got dropped, so single notes would just sustain forever. It seems unlikely that both of these patches just happened to have the same bug; also, when I ran the Magus in device mode instead of host mode (i.e. connecting it to a computer and sending MIDI from there) I could not reproduce the issue even with 5 or 6 notes playing at once. It seems as if NOTE OFF messages are being dropped by the Magus itself when it is in host mode. @mars, is this surprising?

So after finally getting a custom firmware built and solving my knob / MIDI drop issues, I’m back to trying some patch dev! :slight_smile:
I am encountering some new questions… My main one right now is—

I am a bit confused by the debugMessage function… I’m running this:

  void processScreen(ScreenBuffer& screen){
    static int q = 0;
    debugMessage("Frame count", q++);
  }

When I run this, I get a single number the first time I knob over to the “Status” screen. This number seems to never change. I can knob away from “Status” and back, it is still stuck on one number. If I press down on one of the bottom four knobs and then re-knob to the Status screen, the number changes. What is the correct way to use “debugMessage”?

Other, kinda specific questions:

  • Is there a way to not clear the screen between processScreen calls?
  • I notice that my patch outputs are scaled by the current “volume” number, not just on the headphone output but also on the top right (processAudio) CV outputs. This could cause problems for patches where the realtime CV outputs are used for triggers or CV note values. Is there a way to set separate “volume” for the headphones and the CV? Is there a way for a patch to opt out of the “volume” control for the CV outputs?

I am testing on git:732ceac17cb merged with a447765258d9.

The display is always drawn starting from an empty buffer in this callback. That’s because the Magus will update the buffer contents before showing it, and we don’t want to allocate a new buffer for that. But you can easily keep a copy of it between draw calls:

This should require copying a buffer of 8Kb for the current display size. Btw, I think this class needs a method for calculating this, like:

size_t getBufferSize() const {
    return sizeof(Colour) * width * height;
}

Also, you should just use object members to store data like frame counters; there’s no reason to use static variables here.

I don’t think that volume has anything to do with CV level; it just sends a specific command to the audio codec’s builtin amplifier. Do you get this effect in every patch?

Thanks, that’s helpful!

I get it with every patch I have created as well as the YM2413 Synth patch. So that could mean that it is something to do with Magus or it could be that the patches I wrote are failing to opt out of the “builtin amplifier”.

I tried testing this patch:

because it was the one patch I could find in the library that generated CVs. But whenever I boot it on Magus+732ceac17cb+a447765258d9 it crashes.

Here is the patch code for my patch where I am trying to output CV control but it’s being scaled by the volume. It is like an improved Midi Note Interface plugin. My processAudio is very simple.

Oh I see. You’re still using audio, not just CV via the parameter outputs as I thought initially. So yeah, this goes through the codec and its programmable amplifier. I think you should keep the volume at the maximum setting if you want to generate V/Oct CV. Also, if you change the volume, you’ll have to recalibrate (or restore the previous settings).

Also, we’ll probably allow setting volume from user patches eventually, so you could set it to max for V/Oct output or lower it for normal audio patches. I think it’s already possible via MIDI from an external controller.

Yeah, I haven’t figured out how to use the parameter ins/outs for rapidly changing values. Earlier in this thread @mars said about parameters:

The default buffer size is 256 samples, which at 48kHz sample rate means that block rate is 187.5Hz.
With the older codebase, on the OWL Pedal and OWL Modular, you can easily change the block size up to 1024, or down to 2 samples, which gives you a lot of range and makes audio rate parameters possible (up to 24kHz).
In the new OpenWare firmware we have not yet implemented a dynamically configurable block size

If there’s a way in current OpenWare to dynamically configure block size I don’t know what it is.

If I’m outputting CV/gate I don’t necessarily need audio rate, but I might need better than 188 Hz to outrun MIDI. (And I may start implementing things like an arpeggiator or sequencer soon, in which case 24kHz would be adequate but I would probably need things to change more often than every 256 samples…)

Also, we’ll probably allow setting volume from user patches eventually

When you reach this point I’d recommend some way to set volume temporarily/“locally”, i.e. to say “set volume to 100% for me, but when you switch back to the other patch, return to the previous global volume”.

Buffer size still can’t be changed. And with USB audio things are even more complicated now, so I’m not sure when this will be working. I used to change it sometimes in the old OWL firmware.

BTW, that info is not quite correct. Audio block size is 64 samples, which is what is used for calculating latency. The number 256 is for the whole DMA buffer and it’s not particularly relevant as it’s 64 samples x 2 channels x 2 halves of buffer used for DMA. So we should be updating CV every 1.33 ms (or at 750 Hz rate if you like that better).

But I’m not sure how usable CV output via parameters would be: besides this low sample-rate issue, it won’t have calibration applied, and it would have much lower precision (12 bits vs 24 bits). Though this would still be good enough for non-microtonal music.

BTW, that info is not quite correct. Audio block size is 64 samples, which is what is used for calculating latency. The number 256 is for the whole DMA buffer and it’s not particularly relevant as it’s 64 samples x 2 channels x 2 halves of buffer used for DMA. So we should be updating CV every 1.33 ms (or at 750 Hz rate if you like that better).

OK, this is all good to know. To be clear, though: that’s 64 stereo samples per processAudio call, correct? So that’s the fastest I can react to a MIDI change via “audio”. (I don’t know how often processMidi can get called; could it get called between every single processAudio call?)

Is it correct that setParameter gets updated at the end of every processAudio call? So I guess I can react to MIDI on the audio channel exactly as quickly as I can react to it with a parameter…

I think the important question to me (for purposes of whether I can safely use setParameter) is whether this block size is predictable. That is, is it known at the time I compile with OwlProgram? Can I get it at runtime? (I can get the size from AudioBuffer when processAudio is called, but can I assume processAudio always uses the same buffer size?) If the block size (and therefore the parameter refresh rate) can vary between devices or between OpenWare versions then that makes it a little awkward for, say, gates-- if I code my patch to send a pulse on a parameter plug I can’t predict exactly how long that pulse will be held…

Yes, that’s how it works.

It’s the same on all current devices and probably won’t be changed until it can be overridden at runtime. If that happens, we’ll likely be restarting current patch to change buffer size.

Even though we currently use the same settings, such things should be calculated at runtime based on the AudioBuffer size and sampling rate. You’re looking at it from the opposite direction: if you need to send a pulse that is held for a certain number of milliseconds, you should count how many calls to processAudio its bound parameter must be held high for.

This is all very helpful, thank you.

So I’ve got a few patches added to my patch repo now:

  • ScreenSaver: This is deeply useless, it plays no sound and draws random numbers to the screen
  • Silence: This does nothing at all, which is actually useful— I now have this set as my patch 1 so that when I first turn on the Magus it makes no noise and draws little power
  • Midi2CV: As described above, this converts MIDI to CV. It is an improvement on the “Midi Note Interface” patch on rebeltech/patches in that it has correct monophonic behavior (if you hold down two notes it will play the more recent one, and retrigger if the second note is then released) and it displays which notes are being held down (I am still struggling with whether to display right-to-left or left-to-right).

I uploaded Silence and Midi2CV to rebeltech/patches (although since USE_SCREEN is off, it does not have the screen display in this version). I think next I will try to add RPEG to Midi2CV and maybe make a simple sequencer.

With Midi2CV I can now use the Magus to play my eurorack from a MIDI keyboard! Before I could only do that by getting a laptop in the loop. (My eurorack doesn’t have an oscillator so if you look carefully at this patch you might notice I’m actually using an oscilloscope as my oscillator…)

Super excited to have the Magus fully working now!

I’ve got a PR in the works that will finally allow us to compile screen patches with the online compiler.
Note though that the API is changing. Instead of specifying PLATFORM=xyz, all the screen-specific code has been moved to a new base class. So in the future you will need to use this:

class KickBoxPatch : public MonochromeScreenPatch {
  void processScreen(MonochromeScreenBuffer& screen){
    // ...
  }
};

The ScreenBuffer interface stays the same, bar a couple of new drawCircle/fillCircle methods courtesy of @antisvin.

I wonder if SevenSegmentLEDScreenPatch will be a thing eventually :wink:

Okay, cool. One thing I’d ask, though, is that you make sure it remains easy to switch between building with-screen and no-screen patches by #if-ing on USE_SCREEN. (This is probably possible even with a subclass, because the developer could #define the parent class name.)

By the way, could I drop a few feature requests on you? These are just suggestions but…

  • Would it be hard to add support for directories in the upload interface? My patch repo now has a subdirectory of shared support headers, but I have to garble the file names when I upload them, because the patches website puts all uploaded files in one directory.

  • Is there any way that I could draw input from the headphone jack input and the modular input at the same time? Or mix it so that the LEFT_CHANNEL samples come from the headphone jack and the RIGHT_CHANNEL samples come from the modular input? I’m asking this because it would be nice to be able to take input from my eurorack and my non-eurorack drum machine at the same time.

  • Let’s say I want to make a parameter that takes a limited number of discrete values— like, “Drone mode: On / Off” or “Mode: Drone / Midi / Random”. How would you recommend doing this? getIntParameter()? Does getIntParameter just map the -1 … 1 input range to an integer range? Is there a way to set up getIntParameter so that one tick of the knob corresponds to a single change in integer values?

    • Alternately: AudioUnits have a feature where “names” can be assigned to the values of integer parameters; when the interface is drawn, the patch object gets called with the value it wishes to display and is given a chance to return a custom string instead of the value. Would you consider a “parameter to string” feature like this? You could, for example, implement it as an overridable method on Patch that returns a char *: when you are about to draw the “PARAMETER D [bar here]” at the bottom as a knob is turned, if the returned string is non-NULL you could display that string in place of the bar. For some things this would be easier than custom screen drawing code.

I imagine you’d need to use a stereo-to-mono splitter for that (and leave one of the inputs unconnected). This is dealt with in the analog part of the Magus; the audio codec just processes whatever it gets as inputs on its 2 channels.

That’s how getParameterValue() works (except that it uses the 0.0…1.0 range). You can get parameter values scaled to any range with getIntParameter().

Encoder sensitivity can be changed, and parameters don’t know if they’re controlled by encoders, pots, MIDI or patched directly. But you can always track this with a separate value that counts increments/decrements:

if (prev_param > param)
    steps--; // parameter decreased since last check
else if (prev_param < param)
    steps++; // parameter increased since last check
prev_param = param;

That would require making 20 function calls on every screen update even if this feature is unused. I think you should just draw a custom UI.

Oh, cool. To be clear, there’s nothing like a mixer here, right? But if I plug in a splitter it’s safe to use either one of [left audio channel 3.3V range] or [left modular channel 5V range]?

I’ll try this; I think I might need to add some debouncing, but it should be enough for now. Still, it seems like it would be better if the OpenWare layer handled this, because if I did it this way, wouldn’t the parameter get “stuck” if you turned it all the way to the 0 or 1 limits? OpenWare “knows” that the rotary encoders are rotary encoders, but my patch does not; my patch only knows it’s looking at a value between 0 and 1.

Well, I wasn’t thinking of the little “all inputs” thing at the bottom, I was mostly thinking of the separate “big” bar right above it. There’s the grid of bars at the bottom, but above that there’s a label for the last parameter turned and a bar for just that last parameter. OpenWare knows which parameter is being displayed there at any one time, but I don’t. If I knew what parameter it was displaying I could erase the bar and draw in text of my own.

(I find an onEncoderChanged in patch.h, which could potentially solve two of my problems above, because I could (1) get a raw encoder delta out of it and (2) use it to track which knob was last turned; but in both cases I don’t know how to find out which parameter the menu interface has currently mapped to which knob.

By the way, what does the “samples” parameter on encoderChanged and buttonChanged mean?)

You shouldn’t need debouncing in this case as you won’t be getting raw encoder values.

I see - well, that’s less problematic. I think that if you care about such a feature and can provide an update and a sample patch to test it, Martin likely wouldn’t mind adding this to OWL.

This is the code that renders the active parameter. So I think you’d need an extra callback function registered by the patch; if it’s present, it should be called before the last line that renders the rectangle. And if that function returns a non-empty result, it would print text instead of drawing the rectangle.

I think that would be the number of samples since the last click.
