## apulSoft Blog

apulSoft apShaper uses antiderivative antialiasing. The idea for this kind of algorithm first appeared in a 2016 paper by Julian D. Parker, Vadim Zavalishin, and Efflam Le Bivic. I found that the paper's detection and handling of ill-conditioned cases does not work for arbitrary signals and came up with a different idea, but that's a topic for another day. However, the base algorithm is the same, and here is a simple explanation:

(Figure: green diamonds mark four samples $s_n$ at times $\{-1, 0, 1, 2\}$.)

A continuous version of the signal is needed for the algorithm. An easy one is linearly interpolating between samples (green polygon). A waveshaping function $\tanh(x)$ is then applied to arrive at the red curve.

As the input samples are linearly connected, a good approximation for the waveshaping result between two samples can be found by calculating the area under the shaper function between the two sample values and dividing it by the difference between the sample values. If antiderivatives are used to calculate the area, infinite time resolution is achieved on the waveshaping function, which reduces aliasing a lot.

$y_n = \frac{AD1(s_n) - AD1(s_{n-1})}{s_n - s_{n-1}}$

$AD1(x)$ is the first-order antiderivative of the waveshaping function.
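As a sketch, the first-order version for $\tanh$ could look like this in C++. The small-difference fallback shown here is just a naive midpoint evaluation with a made-up threshold, not the paper's (or apShaper's) handling of ill-conditioned cases:

```cpp
#include <cmath>

// First-order antiderivative antialiasing for f(x) = tanh(x).
// AD1(x) = ln(cosh(x)) is the antiderivative (naive form, fine for
// moderate inputs). When consecutive samples are almost equal, the
// division becomes ill-conditioned, so this sketch falls back to
// evaluating the shaper at the midpoint.
double adaa1Tanh(double s, double sPrev)
{
    const double diff = s - sPrev;
    if (std::fabs(diff) < 1.0e-6)
        return std::tanh(0.5 * (s + sPrev)); // naive fallback
    const double ad1 = std::log(std::cosh(s));
    const double ad1Prev = std::log(std::cosh(sPrev));
    return (ad1 - ad1Prev) / diff;
}
```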

To make it work in apShaper, a lot of additional problems had to be solved. The entire process happens one level higher with second-order antiderivatives. Fast and robust calculation of the required antiderivatives is key to making it all work.

Now, looking at $\tanh(x)$ and asking Maxima to produce the antiderivatives, this is what we get:

$f(x) = \tanh(x)$

$AD1(x) = \ln(\cosh(x))$

$AD2(x) = \tfrac{1}{2}\mathrm{Li}_2(-e^{-2x}) - \tfrac{1}{2}x^2 - x\ln(1 + e^{-2x}) + x\ln(\cosh(x)) + \tfrac{\pi^2}{24}$

(Figure: the $\tanh(x)$ function.)

$\tanh$ itself seems unproblematic and is implemented in most computer languages, but the antiderivatives are tricky.

Calculating $AD1(x)=\ln(\cosh(x))$ directly has accuracy issues for large arguments: $\cosh(x)$ overflows double precision already around $|x| \approx 710$, and even below that, computing a huge intermediate value only to collapse it back down with the logarithm wastes precision. $\tanh(x)$ approaches the horizontal lines $y=-1$ and $y=1$ for large magnitudes, so for large $|x|$ the antiderivative is approximately $|x|$.
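To make the problem concrete, here is a small demonstration (not apShaper code) comparing the naive evaluation with the large-argument limit:

```cpp
#include <cmath>

// cosh(x) exceeds the double range around x ~ 710, so the naive
// ln(cosh(x)) evaluates to infinity, while the exact value is only
// about |x| - ln(2).
double naiveAD1(double x)    { return std::log(std::cosh(x)); }
double largeArgAD1(double x) { return std::fabs(x) - std::log(2.0); }
```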

As $\tanh(x)$ is an odd function, $AD1(x)$ has to be even, so $x$ can be replaced by $|x|$.

$AD1(x) = \ln(\cosh(|x|)) = |x| + \ln(\cosh(|x|)) - |x| = |x| + \ln(\cosh(|x|)) - \ln(e^{|x|})$

$= |x| + \ln\left(\tfrac{1}{2}(e^{|x|}+e^{-|x|})\right) - \ln(e^{|x|}) = |x| + \ln\left(\tfrac{1}{2}(1 + e^{-2|x|})\right)$

$= |x| + \ln(1 + e^{-2|x|}) - \ln(2)$

This is much better for computers. $|x|$ has perfect precision, subtracting $\ln(2)$ is trivial and accurate, and $\ln(1 + e^{-2|x|})$ is well-behaved: $e^{-2|x|}$ always stays between 0 and 1, and the entire term becomes very close to zero for huge $|x|$. To evaluate it precisely, there's even a $\ln(1+x)$ function in C++: `std::log1p()`.
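The rewritten formula maps directly to code, for example (a sketch, function name made up):

```cpp
#include <cmath>

// Numerically stable AD1(x) = ln(cosh(x)), computed as
// |x| + ln(1 + e^(-2|x|)) - ln(2). Works for arbitrarily large |x|.
double ad1Tanh(double x)
{
    const double ax = std::fabs(x);
    return ax + std::log1p(std::exp(-2.0 * ax)) - 0.6931471805599453; // ln(2)
}
```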

For $AD2(x)$, one problem is the dilogarithm $\mathrm{Li}_2()$. Some C++ implementations exist, but they have complex argument reduction with many cases, which makes them slow and impossible to accelerate using SIMD operations. Interestingly, the $AD2(x)$ from Maxima contains the original $AD1(x)$ formula. Inserting the improved-precision version and applying a lot of algebraic steps finally results in:

$AD2(x) = \operatorname{sgn}(x)\left(\tfrac{1}{2}x^2 - |x|\ln(2) + \tfrac{1}{2}\mathrm{Li}_2(-e^{-2|x|}) + \tfrac{\pi^2}{24}\right)$

Similar to the first-order case, this just becomes the difference to a simple (negative-mirrored) parabola, which makes it stable. Additionally, the $\mathrm{Li}_2(x)$ argument $-e^{-2|x|}$ has a known range of $[-1, 0]$. That makes the $\mathrm{Li}_2$ implementation much easier, and the usual argument reduction is no longer needed.

From the definition $\tanh(x) = \frac{e^{2x}-1}{e^{2x}+1}$ and knowing $\tanh$ is odd, everything can be reformulated:

$f(x) = \tanh(x) = -\operatorname{sgn}(x)\left(1 - \frac{2}{1 + e^{-2|x|}}\right)$

$AD1(x) = |x| + \ln(1 + e^{-2|x|}) - \ln(2)$

$AD2(x) = \operatorname{sgn}(x)\left(\tfrac{1}{2}x^2 - |x|\ln(2) + \tfrac{1}{2}\mathrm{Li}_2(-e^{-2|x|}) + \tfrac{\pi^2}{24}\right)$

All three functions contain $e^{-2|x|}$, which saves computation time when all of them are needed for the same value.
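As an illustration, the three functions might be bundled like this. The `li2Series` helper is just the plain dilogarithm power series standing in for a fast implementation (the bonus listing below gives one), and all names here are made up for the example:

```cpp
#include <cmath>

// Evaluate f(x), AD1(x) and AD2(x) together, sharing e^(-2|x|).
// li2Series() is the plain power series sum z^k/k^2; it converges
// (slowly near z = -1) over the whole needed range [-1, 0].
static double li2Series(double z)
{
    double sum = 0.0;
    double zk = z;
    for (int k = 1; k <= 60; ++k) {
        sum += zk / double(k * k);
        zk *= z;
    }
    return sum;
}

struct TanhSet { double f, ad1, ad2; };

TanhSet tanhSet(double x)
{
    const double ax = std::fabs(x);
    const double sgn = (x < 0.0) ? -1.0 : 1.0;
    const double e = std::exp(-2.0 * ax);      // shared by all three
    const double ln2 = 0.6931471805599453;
    const double pi2_24 = 0.4112335167120566;  // pi^2 / 24
    TanhSet r;
    r.f = -sgn * (1.0 - 2.0 / (1.0 + e));
    r.ad1 = ax + std::log1p(e) - ln2;
    r.ad2 = sgn * (0.5 * x * x - ax * ln2 + 0.5 * li2Series(-e) + pi2_24);
    return r;
}
```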

Bonus content: the C++ function used to calculate $\mathrm{Li}_2(x)$ for values between $-1$ and $0$.

```cpp
#include <cmath>
using namespace std;

// 6- and 7-coefficient polynomial evaluation using Estrin's scheme,
// built on fused multiply-adds.
static inline double estrin(double x, double c0, double c1, double c2, double c3, double c4, double c5)
{
    const auto x2 = x * x;
    const auto x4 = x2 * x2;
    return fma(x4, fma(x, c5, c4), fma(x2, fma(x, c3, c2), fma(x, c1, c0)));
}

static inline double estrin(double x, double c0, double c1, double c2, double c3, double c4, double c5, double c6)
{
    const auto x2 = x * x;
    const auto x4 = x2 * x2;
    return fma(x4, fma(x2, c6, fma(x, c5, c4)), fma(x2, fma(x, c3, c2), fma(x, c1, c0)));
}

/// calculate Li2 for x between -1 and 0
double Li2_m1_0(double x)
{
    // map to y = x/(x-1) in [0, 0.5] via the identity
    // Li2(x) = -Li2(x/(x-1)) - ln^2(1-x)/2, then evaluate a
    // rational approximation of Li2(y)/y
    const auto l = log1p(-x); // ln(1 - x)
    const auto y = x / (x - 1);
    const auto p = estrin(y, 0.9999999999999999502, -2.6883926818565423430, 2.6477222699473109692,
                          -1.1538559607887416355, 2.0886077795020607837e-1, -1.0859777134152463084e-2);
    const auto q = estrin(y, 1, -2.9383926818565635485, 3.2712093293018635389, -1.7076702173954289421,
                          4.1596017228400603836e-1, -3.9801343754084482956e-2, 8.2743668974466659035e-4);
    return -0.5 * l * l - y * p / q;
}
```
