Interesting quote from H. Kantz and T. Schreiber, "Nonlinear Time Series Analysis"

A signal which does not change is trivial to predict: the last observation is a perfect forecast for the next one. Even if the signal changes, this can be a reasonable method of forecasting, and a signal for which this holds is called persistent. A system which changes periodically over time is also easy once you have observed one full cycle. Independent random numbers are easy as well: you do not have to work hard since working hard does not help anyway. The best prediction is just the mean value. Interesting signals are something in between; they are not periodic but they contain some kind of structure which can be exploited to obtain better predictions. (Kantz & Schreiber, 2005)
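The three baselines in the quote are easy to check numerically. Below is a minimal sketch (my own illustration, not from the book) that measures the one-step persistence forecast against the mean forecast on a constant, a periodic, and a random signal; the signal definitions and function names are assumptions for the example.

```python
import math
import random
import statistics

random.seed(0)
n = 500

# Three kinds of signals from the quote (illustrative choices):
constant = [3.0] * n                                            # trivial: last value is perfect
periodic = [math.sin(2 * math.pi * t / 50) for t in range(n)]   # easy after one full cycle
noise = [random.gauss(0, 1) for _ in range(n)]                  # best forecast is the mean

def persistence_mse(x):
    """Mean squared error of forecasting x[t+1] with the last observation x[t]."""
    return statistics.fmean((b - a) ** 2 for a, b in zip(x, x[1:]))

def mean_mse(x):
    """Mean squared error of forecasting every value with the sample mean."""
    m = statistics.fmean(x)
    return statistics.fmean((v - m) ** 2 for v in x)

for name, x in [("constant", constant), ("periodic", periodic), ("noise", noise)]:
    print(f"{name:9s}  persistence={persistence_mse(x):.4f}  mean={mean_mse(x):.4f}")
```

As the quote predicts: on the constant signal persistence is exact, on the slowly varying periodic signal persistence beats the mean, and on independent noise the mean forecast wins (persistence roughly doubles the error, since consecutive errors are independent).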

One response: This is true for a limited class of non-constant functions: differentiable maps R --> R, for which the periodic functions form a natural dense subset. But nerve signals are not well modeled this way. (Wavelets on non-periodic basis functions do better than e^(ikt)!) Moreover, maps of reals to reals are a tiny part of the universe of maps M --> N, where M and N are Riemannian manifolds of dimensions 0 ≤ m, n ≤ ∞, or more generally topological spaces that may have no well-defined dimension at all. And yet we can define variation quite rigorously in those settings.