Conceptual questions about C++ patches

Hi,
I’m starting to code some patches, but I’m running into some concepts that are maybe not clear to me.
Please have patience if some of these questions are trivial for you.

  1. AudioBuffer samples: what’s exactly inside the samples (the float array)? Is every float value the volume of a specific frequency?
  2. AudioBuffer samples: its size depends on the “block size” of the OWL Software?
  3. AudioBuffer samples: can I somehow know which array index corresponds to which frequency (in Hz)?
  4. AudioBuffer samples: can I resize the array?
  5. I see I can create a new AudioBuffer (createMemoryBuffer method): can I “play” it instead of the AudioBuffer received in processAudio? How?
  6. Related to point 5: can I “sum” my AudioBuffer into the output without touching the original buffer?
  7. Points 5 and 6 come down to the same underlying question: does the OWL only play its original buffer (the AudioBuffer &buffer parameter), so that I must always modify it to get a result, or can I play audio by other means?

Thanks.

Hiya!

The samples in the buffer are the PCM values of the current block. If you think of a speaker element moving forward and backward, the values represent the speaker position (from -1 to +1) at moments in time, starting from the first sample in the block to the last. So if you want to make a sine wave, the speaker must move like a sine, and so you simply put the values of the sine function into the sample buffer. This is what we call the time domain representation of the signal.
If you want the level (and phase) of the different frequencies you need to transform the signal into the frequency domain, using an FFT.
I hope this answers 1 and 3.
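To make the time-domain picture concrete, here is a minimal sketch (the patch name is made up, and the include may differ between firmware versions) that fills the left channel with a 440 Hz sine wave:

```cpp
#include "Patch.h"
#include <cmath>

// Sketch only: writes a 440 Hz sine into the left channel.
// `phase` is a member so the wave continues smoothly across blocks.
class SineExamplePatch : public Patch {
private:
  float phase = 0.0f;
public:
  void processAudio(AudioBuffer &buffer){
    FloatArray left = buffer.getSamples(LEFT_CHANNEL);
    float increment = 2.0f * M_PI * 440.0f / getSampleRate();
    for(size_t i = 0; i < left.getSize(); ++i){
      left[i] = sinf(phase); // speaker position at this moment in time
      phase += increment;
      if(phase >= 2.0f * M_PI)
        phase -= 2.0f * M_PI; // wrap to keep the phase bounded
    }
  }
};
```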

A more in-depth explanation of the sampling theorem with references:
http://www.earlevel.com/main/2017/08/16/sampling-theory-the-best-explanation-youve-ever-heard-part-1/

  2. Yes, the buffer size equals the block size.

  4. Not from the patch, but you can use OwlControl to change the block size.

  5. Yes, you can ‘play’ any signal by copying the values into the buffer you get in processAudio(), e.g. buffer.getSamples(LEFT_CHANNEL).copyFrom(mybuffer). See the sketch after this list.

  6. Yes, signal summing is done by simply adding the sample values together. You can do it by iterating over the samples, or by using e.g. FloatArray::add(). You can also scale the signals you add by multiplying with a float (also shown in the sketch after this list).
    See for example https://hoxtonowl.com/patch-library/patch/Stereo_Mixer

  7. The incoming AudioBuffer is a container for the two FloatArray objects that represent the left and right channels. When processAudio() is called, these arrays contain the input audio block; when it returns, they should contain the output audio. So you can discard the input completely and replace the values, or use the input in some way - up to you!
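Putting points 5 and 6 together, here is a hedged sketch of processAudio(), assuming mybuffer is a block-sized FloatArray that the patch has filled elsewhere (for instance one channel of a buffer from createMemoryBuffer()):

```cpp
void processAudio(AudioBuffer &buffer){
  FloatArray left = buffer.getSamples(LEFT_CHANNEL);
  FloatArray right = buffer.getSamples(RIGHT_CHANNEL);
  // (5) Replace the input entirely: 'play' our own signal on the left.
  left.copyFrom(mybuffer);
  // (6) Mix: scale our signal, then sum it onto the right channel.
  // add() works in place, so the dry input remains part of the sum.
  mybuffer.multiply(0.5f);
  right.add(mybuffer);
}
```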

The way it works under the hood is that 24-bit integer values from the audio codec input stream are converted to floats. They are then handed over to the patch for processing, then converted back to integers and sent to the output stream. So if the patch does nothing, then the same samples are simply copied from the input to the output.
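Just as an illustration of that conversion (the firmware does this for you, and the exact scaling in the codec driver may differ), a signed 24-bit sample maps to the -1..+1 float range roughly like this:

```cpp
#include <cstdint>

// 2^23 = 8388608 is the magnitude of the most negative 24-bit value.
float sampleToFloat(int32_t s24){
  return (float)s24 / 8388608.0f;
}

int32_t floatToSample(float f){
  return (int32_t)(f * 8388608.0f); // real code would also clip to range
}
```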

hth,

Martin

Ok, many thanks, that solved most of my doubts.

In particular, believing I had frequency-domain data when I was actually getting the time domain led to many funny results, hahaha.

I bet!

Most useful functions are probably in the FloatArray class. There’s also an FFT implementation if you want to try that out, but that’s definitely in the ‘advanced’ section!
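For instance (method names as I recall them from the FloatArray class; check FloatArray.h in the OwlProgram sources for the definitive list):

```cpp
// Assumes `other` is another block-sized FloatArray prepared by the patch.
void processAudio(AudioBuffer &buffer){
  FloatArray left = buffer.getSamples(LEFT_CHANNEL);
  left.multiply(0.5f);         // scale the whole block by a gain factor
  left.add(other);             // mix another signal in, in place
  float level = left.getRms(); // RMS level of the block, e.g. for metering
  (void)level;                 // only illustrating the call here
}
```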