
Some people know morphology, other people know convolutions.

>> The only question is, what do you convolve the durations with?
>
> I fear it isn’t the only question.   I’ll start with: “What does ‘convolution’ mean?”

Sorry.    Running ahead of myself there.  Mathematically, a convolution is this:

out[i] = Sum over d of  input[i-d] * kernel[d],

where the sum is taken over whatever d-values are possible, i.e. wherever both the kernel and the input are defined.
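
(If you want that “wherever both are defined” condition pinned down, here is the same sum for a single output index, written as one Python expression.  The names input, kernel and i are just placeholders for this sketch; a full loop appears at the end of the post.)

# One output sample: add up kernel[d] * input[i-d] over every delay d
# where both indices actually exist.
out_i = sum(kernel[d] * input[i - d]
            for d in range(len(kernel))
            if 0 <= i - d < len(input))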

In the equation above, “out” is the output time series, and “input” is the input time series.    “kernel” is the “thing you convolve with”; it is also a time series.    Or, more precisely, it describes what happens at different time delays.  This may be a bit more obvious if we take a small kernel and write the convolution out:

Take kernel[0]=1, kernel[1]=0.5 and kernel[2]=0.1.   Then:

...
out[5] = in[5]*1 + in[4]*0.5 + in[3]*0.1
out[6] = in[6]*1 + in[5]*0.5 + in[4]*0.1
out[7] = in[7]*1 + in[6]*0.5 + in[5]*0.1
...
out[100] = in[100]*1 + in[99]*0.5 + in[98]*0.1
...

You can see that kernel[0] controls how much goes “straight through”, immediately going from in to out without any delay.     Kernel[1] controls how much the input affects the output with a delay of 1, and kernel[2] controls how much input shows up on the output with a delay of 2.
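
A quick way to see this delay picture for yourself (made-up numbers, nothing from the discussion above) is to convolve that kernel with an impulse: a single 1 surrounded by zeros.  The output is just the kernel, echoed at delays 0, 1 and 2:

import numpy

kernel = [1, 0.5, 0.1]
impulse = [0, 0, 0, 1, 0, 0, 0]     # a lone spike at position 3

print(numpy.convolve(impulse, kernel))
# the result is 0, 0, 0, 1, 0.5, 0.1, 0, 0, 0: the spike passes straight
# through, then 0.5 of it shows up at delay 1, then 0.1 of it at delay 2.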

So, a kernel of [1, -1] would give these equations:

...
out[40] = in[40] - in[39]
out[41] = in[41] - in[40]
out[42] = in[42] - in[41]
out[43] = in[43] - in[42]
...

As you can see here, each output is the difference between successive inputs.

You can think of a convolution as a filter.  Kernels like [1,1] or [1,1,1] or [1,2,1] are smoothing filters.  At each point, they perform a little sum or average over a region of the input.   And, it turns out that (with this kind of kernel) short, sharp wiggles in the input will be reduced in the output.
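
For example (again with made-up numbers), here is a flat signal with one sharp wiggle, put through a [1, 2, 1] kernel scaled to sum to 1:

import numpy

spiky = [0, 0, 0, 4, 0, 0, 0]        # flat, apart from one sharp spike
smooth_kernel = [0.25, 0.5, 0.25]    # [1, 2, 1], scaled so it sums to 1

print(numpy.convolve(spiky, smooth_kernel))
# the spike of height 4 comes out as 1, 2, 1: lower, wider,
# and with the same total area.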

Kernels like [1, -1] or [1, -2, 1] are “sharpening filters”.  They will make slowly changing inputs disappear, because they report the differences between successive inputs.    This kind of kernel will preserve sharp little wiggles.
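
Here is the same kind of throwaway demonstration: a slowly rising ramp with one abrupt jump in the middle, put through the [1, -1] kernel.  The slow change almost vanishes; the jump stands out:

import numpy

ramp_with_step = [0, 1, 2, 3, 14, 15, 16, 17]   # gentle slope, one sudden jump
diff_kernel = [1, -1]

print(numpy.convolve(ramp_with_step, diff_kernel, mode='valid'))
# (mode='valid' just trims the partially-overlapping ends.)
# the result is 1, 1, 1, 11, 1, 1, 1: the slow slope becomes a small
# constant, while the sudden jump is preserved.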

The big, important properties of convolutions are these (the first two get a quick numerical check after the list):
* They are linear operations on a time series.    If you double the entire input, you’ll double the entire output.
* They are translationally invariant.   That’s a fancy way of saying that they perform the same operation to every piece of the input.   If they smooth the beginning, they will also smooth the end.
* They can act as low-pass, band-pass, and high-pass filters. (Or various mixtures of them.)
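
Here is that quick check (arbitrary numbers; nothing special about them):

import numpy

kernel = [1, 0.5, 0.1]
signal = numpy.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])

# Linearity: doubling the whole input doubles the whole output.
assert numpy.allclose(numpy.convolve(2 * signal, kernel),
                      2 * numpy.convolve(signal, kernel))

# Translational invariance: shifting the input just shifts the output.
shifted = numpy.concatenate(([0.0], signal))    # same signal, one step later
out = numpy.convolve(signal, kernel)
out_shifted = numpy.convolve(shifted, kernel)
assert numpy.allclose(out_shifted[1:1 + len(out)], out)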

And, finally, in Python, a convolution looks like this:

output = []
m = len(kernel)
n = len(input)
assert n >= m
# Only produce outputs where the whole kernel overlaps the input,
# i.e. where i-d stays inside the input for every delay d.
for i in range(m - 1, n):
    o = 0.0
    for d in range(m):
        o += kernel[d] * input[i - d]
    output.append(o)

or this (mode='valid' keeps only the fully-overlapping outputs, which matches the loop above; the default mode, 'full', would also return the partially-overlapping ends and so give a longer result):

import numpy
output = numpy.convolve(input, kernel, mode='valid')
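
As a quick sanity check (made-up numbers again), the 'valid' output is the same sum as the hand-written equations above, just re-indexed so that the first fully-overlapping position comes first:

import numpy

inp = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]   # arbitrary example values
kernel = [1, 0.5, 0.1]

valid = numpy.convolve(inp, kernel, mode='valid')

# valid[0] corresponds to out[2] in the notation above, so valid[3] is out[5]:
assert abs(valid[3] - (inp[5]*1 + inp[4]*0.5 + inp[3]*0.1)) < 1e-12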