The samples in the buffer are the PCM values of the current block. If you think of a speaker element moving forward and backward, the values represent the speaker position (from -1 to +1) at successive moments in time, from the first sample in the block to the last. So to make a sine wave, the speaker must move like a sine, which means you simply put the values of the sine function into the sample buffer. This is what we call the time domain representation of the signal.
If you want the level (and phase) of the different frequencies you need to transform the signal into the frequency domain, using an FFT.
I hope this answers 1 and 3.
A more in-depth explanation of the sampling theorem with references:
2) Yes, the buffer size equals the block size.
4) Not from the patch, but you can use OwlControl to change the block size.
5) Yes, you can 'play' any signal by copying the values into the buffer you get in processAudio().
6) Yes, signal summing is done by simply adding the sample values together. You can do it by iterating over the samples, or by using e.g. FloatArray::add(). You can also scale the signals you add by multiplying them by a float.
See for example https://hoxtonowl.com/patch-library/patch/Stereo_Mixer
7) The incoming AudioBuffer is a container for the two FloatArray objects that represent the left and right channels. When processAudio() is called, these arrays contain the input audio block. When it finishes, they should contain the output audio. So you can discard the input completely and replace the values, or use the input in some way - up to you!
The way it works under the hood is that 24-bit integer values from the audio codec input stream are converted to floats. They are then handed over to the patch for processing, then converted back to integers and sent to the output stream. So if the patch does nothing, then the same samples are simply copied from the input to the output.