~ Nonlinear Adaptive Filtering as a Form of Artificial Intelligence ~

Error filtering capabilities

Error filtering demo
The goal of the computational experiment is to find out how well the identification method filters input errors. Given the iterative nature of the algorithm, it is fair to assume that errors in the output signal are filtered very well.

for (int i = T - 1; i < N; ++i) {
   // Predict the output as the sum of kernel cells addressed by the last T input samples.
   double predicted = 0.0;
   for (int j = 0; j < T; ++j) {
      predicted += U[(int)((x[i - j] - xmin) / deltaX), j];
   }
   // Spread the prediction error evenly over the T addressed cells, damped by the learning rate.
   double error = (y[i] - predicted) / T * learning_rate;
   for (int j = 0; j < T; ++j) {
      U[(int)((x[i - j] - xmin) / deltaX), j] += error;
   }
}
The value of $error$ in the code is used to correct the kernel $U[\cdot, \cdot]$; the correction is statistical, with its inaccuracy compensated by the $learning\_rate$ parameter. However, the same cannot be said about the input. The estimated model is the two-dimensional array $U[\cdot, \cdot]$, and errors in the inputs lead to errors in addressing the corrected elements. The influence of these errors on accuracy is significantly reduced by the smoothness of the kernel $U[\cdot, \cdot]$ and by the $learning\_rate$ parameter, but it is only reduced, not eliminated.
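As a rough numerical illustration (the concrete values of xTrue, inputError, xmin and deltaX below are assumptions, not taken from the experiment), an input error shifts the addressed cell by roughly inputError / deltaX positions:

// Illustration only: how an input error changes which kernel cell gets corrected.
double xTrue = 0.375, inputError = 0.12;   // assumed sample value and input error
double xmin = 0.0, deltaX = 0.05;          // assumed grid parameters
int cleanIndex = (int)((xTrue - xmin) / deltaX);               // cell addressed by the true input
int noisyIndex = (int)((xTrue + inputError - xmin) / deltaX);  // cell addressed by the corrupted input
// The correction lands several cells away from where it belongs; because the kernel U is smooth,
// neighbouring cells hold similar values, so the misplaced correction does limited damage.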

Computational simulation

The goal of the experiment was to test the accuracy of the estimated model when the input errors are correlated with the input itself. The errors were generated as a random process with zero mean, an amplitude near 10 percent of the input range, and a Pearson correlation coefficient with the input near 85 percent.
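One possible way to generate such errors (a sketch only; the article does not give the exact procedure, and the mixing formula and the use of System.Linq are my assumptions, while x, xmin, xmax and N come from the code above) is to mix the standardized input with unit-variance white noise and rescale:

// Sketch: zero-mean errors with Pearson correlation near 0.85 with the input
// and amplitude near 10 percent of the input range.
var rand = new Random(0);
double rho = 0.85, scale = 0.1 * (xmax - xmin);
double xmean = x.Average();
double xstd = Math.Sqrt(x.Sum(v => (v - xmean) * (v - xmean)) / N);
double[] xNoisy = new double[N];
for (int i = 0; i < N; ++i) {
   double z = (x[i] - xmean) / xstd;                            // standardized input
   double w = (rand.NextDouble() * 2.0 - 1.0) * Math.Sqrt(3.0); // zero-mean, unit-variance white noise
   double e = rho * z + Math.Sqrt(1.0 - rho * rho) * w;         // corr(e, x) is approximately rho
   xNoisy[i] = x[i] + scale * e;                                // corrupted input used for identification
}

The mix ratio sets the correlation with the input, and the common factor scale sets the error amplitude without changing that correlation.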

The experiment showed that the errors were effectively filtered: after the model was identified on the inaccurate data, it converted unseen inputs into outputs with accuracy near 99 percent.
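For reference, evaluating the identified kernel on unseen data uses the same addressing as the training loop. A minimal sketch, assuming clean test arrays xTest and yTest of length M and a simple relative-error accuracy measure (the article does not specify the exact metric), could look like this:

// Sketch: apply the identified kernel U to unseen inputs and measure relative accuracy.
double sumAbsError = 0.0, sumAbsY = 0.0;
for (int i = T - 1; i < M; ++i) {
   double predicted = 0.0;
   for (int j = 0; j < T; ++j) {
      predicted += U[(int)((xTest[i - j] - xmin) / deltaX), j];  // same addressing as in training
   }
   sumAbsError += Math.Abs(yTest[i] - predicted);
   sumAbsY += Math.Abs(yTest[i]);
}
double accuracy = 1.0 - sumAbsError / sumAbsY;   // near 0.99 in the reported experiment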

A linear model built on the same data fails badly, with errors of 20 to 35 percent. The code with the simulation example is available via the link at the top.