Obfuscation - the hiding of intended meaning in communication, making communication confusing, willfully ambiguous, and harder to interpret.

Which is what I was dancing around above. I have to confess that when I read maths I tend to glaze over when I see it used like this.

**"The arithmetic introduced by the variable substitution"**

The process of simulation is to mathematically manipulate the values of the samples of the sound to produce the desired effect. In some cases this results in maths that gets a bit too complex to do fast on a limited processor, e.g. floating point ("123.456") versus integer arithmetic ("123456").
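As a minimal sketch of that floating-vs-integer point (the gain value and Q15 format here are my own illustration, not from the paper), the same scaling can be done with one integer multiply and a shift instead of floating-point hardware:

```python
# Fixed-point vs floating-point: applying a made-up gain of 0.3 to a sample.
# In Q15 fixed point, 0.3 is stored as round(0.3 * 2**15) = 9830.

sample = 1000                               # 16-bit integer sample
gain_float = 0.3

# Floating point: needs an FPU, or slow software emulation on a small chip
out_float = sample * gain_float             # ~300.0

# Q15 fixed point: one integer multiply and one shift
gain_q15 = round(gain_float * 2**15)        # 9830
out_fixed = (sample * gain_q15) >> 15       # 299: off by one LSB from truncation

print(out_float, out_fixed)
```

Note the one-LSB error from truncation: cheap arithmetic buys speed at the price of small, well-understood rounding effects.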

In digital signal processing some operations such as "multiply A * B", or "multiply A * B and add to C", are extremely common. There is an old saying that goes something like "a ha'p'orth of hardware is worth a shitload of code", so you can get Digital Signal Processing chips that provide exactly these instructions, using internal hardware to pull the rabbit, C, out of a hat rather than grinding through "turnips times bananas equals pineapples" many times over to get the same result.
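That "multiply A * B and add to C" operation (multiply-accumulate, or MAC) is the inner loop of practically every digital filter. A sketch in plain Python, with made-up coefficients, of what a DSP chip does in one instruction per tap:

```python
# A FIR filter tap loop is repeated multiply-accumulate (MAC):
# acc += coeff * sample. A DSP chip does each MAC in a single
# hardware instruction; here it is spelled out step by step.

coeffs = [0.25, 0.5, 0.25]      # hypothetical 3-tap smoothing filter
samples = [4.0, 8.0, 4.0]       # the last three input samples

acc = 0.0
for c, s in zip(coeffs, samples):
    acc += c * s                # the "multiply A*B and add to C" step

print(acc)                      # 0.25*4 + 0.5*8 + 0.25*4 = 6.0
```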

But if you're cheap you take a step back and try to construct the problem so that you avoid the need for such calculations in the first place, i.e. the need for "the arithmetic introduced by the variable substitution", and then you can get away with using a cheaper/slower processor.

**"This is not a memoryless nonlinearity"**

This hinges on "memoryless". In DSPing a "memoryless" operation is one which makes no reference to a previous state, for example simply scaling samples as a volume control, or straightforward diode clipping.
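A memoryless operation can be written as a plain function of the current sample and nothing else. Here is a minimal sketch, using hard clipping as a crude stand-in for diode clipping (the 0.7 limit is just an illustrative value):

```python
# A memoryless nonlinearity: the output depends only on the current
# sample. Hard clipping, a crude stand-in for diode clipping.

def clip(x, limit=0.7):
    # No stored state: the same input always gives the same output
    return max(-limit, min(limit, x))

print(clip(0.5))    # 0.5, below the limit, passes through
print(clip(1.2))    # 0.7, clipped
print(clip(-2.0))   # -0.7, clipped the other way
```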

An operation with memory uses a value stored from a previous operation as part of the current one, and in turn stores a value for use in the next iteration. Sundry effects that rely on *delay*, such as reverb, echo, chorus, &c., are in this class, as are tone controls and filters.
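A sketch of the simplest delay-based effect of that sort, an echo, where the stored samples in the buffer are the "memory" (the delay length and mix level are arbitrary illustrative values):

```python
# An operation *with* memory: a simple echo, mixing in the sample
# stored `delay` samples ago. The signal history is the memory.

def echo(signal, delay=3, mix=0.5):
    out = []
    for n, x in enumerate(signal):
        delayed = signal[n - delay] if n >= delay else 0.0
        out.append(x + mix * delayed)
    return out

# An impulse reappears, halved, three samples later
print(echo([1.0, 0.0, 0.0, 0.0, 0.0, 0.0]))
```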

Here they are saying that this distortion has memory of a previous state, and the most obvious thing that comes to mind in this context is op-amp latch-up or saturation recovery time, where what is happening now depends on what just happened previously.

There was a rather good article in old *Wireless World* a few (many?) years back that (actually) explained "z-plane" operations/transforms in exactly the way this paper doesn't.

Putting it as simply as I can, this is looking at functions such as poles and zeros (e.g. CR networks, and more complex filters) in terms of finite time steps or samples in the z-plane (z = this sample, z⁻¹ = the previous sample, i.e. a one-sample delay) rather than the more usual analogue s-plane transforms.

e.g.

new-z-value = this-z-value somefunction previous-z-value
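Making that "somefunction" concrete with the simplest case, a first-order (one-pole) difference equation, where the single coefficient is an arbitrary value of my choosing:

```python
# A first-order difference equation of the form sketched above:
# y[n] = x[n] + a * y[n-1], where the previous output is the memory.

def one_pole(x, a=0.5):
    y_prev = 0.0            # the stored state
    out = []
    for xn in x:
        yn = xn + a * y_prev
        out.append(yn)
        y_prev = yn         # remember this value for the next step
    return out

# Feed in an impulse: the output decays by a factor of a each step
print(one_pole([1.0, 0.0, 0.0, 0.0]))   # 1.0, 0.5, 0.25, 0.125
```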

This idea of system memory can also be understood in analogue systems such as CR networks: the voltage stored on the C at any instant is a memory of the previous instant.
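That analogy can be made exact: discretising the RC charging equation dV/dt = (Vin − V)/(RC) in time steps gives V[n] = V[n−1] + (Vin[n] − V[n−1])·dt/RC, with the capacitor voltage as the stored state. A sketch with arbitrary component values:

```python
# The CR-network analogy: the capacitor voltage is the remembered state.
# Discretising dV/dt = (Vin - V)/(R*C) with time step dt gives
# V[n] = V[n-1] + (Vin[n] - V[n-1]) * dt/(R*C). Values are arbitrary.

def rc_lowpass(vin, r=1e3, c=1e-6, dt=1e-4):
    alpha = dt / (r * c)          # 0.1 with these values
    v = 0.0                       # capacitor starts discharged
    out = []
    for u in vin:
        v = v + (u - v) * alpha   # cap voltage creeps toward the input
        out.append(v)
    return out

# Step input: the output rises exponentially toward 1.0
print(rc_lowpass([1.0] * 5))      # 0.1, 0.19, 0.271, 0.3439, 0.40951
```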

Some people seem to get a bit carried away with the fact that you can make a mathematical model of reality, particularly when you can make it run on a modern processor fast enough to simulate real time, and potentially replace the analogue system it is modelling. While it's certainly a neat trick (and there are some things you can do easily in a processor you can't in analogue) I think it tends to confuse the map with the territory.