This sounds similar to work I did years ago combining phase-space manifolds with a rule-based expert system to diagnose failures in mechanical systems exhibiting multi-modal operating regimes.
Hopefully the researchers found a simpler computational method than I did in trying to mate those two systems together. :)
What really caught my attention was the output of a probability curve showing how the system might operate in never-before-seen regimes once the tipping point was reached. The ability to predict behavior outside the training set is a huge win. My method was only predictive while the system operated in the training regime; outside that regime it was useless.
The researchers appear to use reservoir computing approaches, which usually aren't terribly costly in terms of CPU cycles.
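The paper's actual code isn't shown here, but the standard reservoir-computing recipe (an echo state network: a fixed random recurrent reservoir plus a trained linear readout) illustrates why it's cheap. A minimal numpy sketch, using a logistic-map series as a stand-in for real sensor data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Reservoir: fixed random recurrent weights, scaled to spectral radius < 1
N = 200
W = rng.standard_normal((N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, N)

# Training signal: a chaotic series from the logistic map
# (a stand-in for real sensor data; not the paper's dataset)
u = np.empty(3000)
u[0] = 0.4
for t in range(2999):
    u[t + 1] = 3.9 * u[t] * (1.0 - u[t])

# Drive the reservoir; collect states after a washout period
x = np.zeros(N)
states = []
for t in range(2999):
    x = np.tanh(W @ x + W_in * u[t])
    states.append(x.copy())
X = np.array(states[500:])   # states x(t) for t >= 500
y = u[501:]                  # next-step targets u(t+1)

# Only the linear readout is trained (ridge regression) -- this is why
# reservoir computing is cheap: no backprop through the recurrence.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)

pred = X @ W_out
print("one-step RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

The expensive eigendecomposition happens once, offline; training is a single linear solve, which is what keeps the CPU cost low compared with training a recurrent network end to end.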
I'm unsure about real-life applications, though, because one of the quoted papers [0] only uses idealized strange attractors (or whatever they're called) -- only systems described by math.
I'd be very interested to learn how the methods apply to real-world mechanical chaotic systems.
This isn't my field of expertise at all, maybe someone has some experience with this?
I computed a bounding volume in a hyper-dimensional space containing all sensor instruments on the system. The volume was constructed to encompass the entire sensor state space of many previously recorded "normal" operating periods (from startup through steady-state and shutdown).
New operating regimes were then compared to the volume, and any excursions were considered diagnostically relevant conditions.
The cool part (to me, at least) was that the direction of the vector along which the system's state trajectory exited the volume could be put through a classifier that would effectively tell you what went wrong.
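The commenter's exact construction isn't given, so this is only a sketch of the idea: an axis-aligned bounding box stands in for the hyper-dimensional volume, and a nearest-signature classifier on the exit direction stands in for the fault classifier. The sensor data, fault catalog, and all names are hypothetical.

```python
import numpy as np

# "Normal" operating history: rows are snapshots of a hypothetical 4-sensor system
rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(1000, 4))

# Simplest possible bounding volume: per-sensor min/max over all normal periods.
# (The original used a richer hyper-dimensional volume; a box is a stand-in.)
lo, hi = normal.min(axis=0), normal.max(axis=0)

def excursion(state):
    """Return the component of `state` outside the box, or None if inside."""
    out = np.maximum(state - hi, 0.0) + np.minimum(state - lo, 0.0)
    return out if np.any(out != 0.0) else None

# Hypothetical catalog of known fault signatures: exit directions labelled by cause
faults = {
    "bearing_wear": np.array([1.0, 0.0, 0.0, 0.0]),
    "coolant_loss": np.array([0.0, -1.0, 0.0, 0.0]),
}

def classify(exit_vec):
    """Nearest-signature classifier on the direction the trajectory left the box."""
    d = exit_vec / np.linalg.norm(exit_vec)
    return max(faults, key=lambda k: d @ faults[k])

state = np.array([8.0, 0.1, 0.0, 0.0])   # far outside the box on sensor 0
vec = excursion(state)
if vec is not None:
    print("fault:", classify(vec))       # -> fault: bearing_wear
```

The appeal of the approach is that the detector needs only "normal" data; labelled failures are required only for the second stage that names the fault.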
> In many systems, a steady state is not achieved until some time after the system is started or initiated. This initial situation is often identified as a transient state, start-up or warm-up period. [1]
> In the artificial systems that are specified by engineering control theory, the reference signal is considered to be an external input to the 'plant'.[7] In engineering control theory, the reference signal or set point is public; in PCT, it is not, but rather must be deduced from the results of the test for controlled variables, as described above in the methodology section. This is because in living systems a reference signal is not an externally accessible input, but instead originates within the system. In the hierarchical model, error output of higher-level control loops, as described in the next section below, evokes the reference signal r from synapse-local memory, and the strength of r is proportional to the (weighted) strength of the error signal or signals from one or more higher-level systems. [26]
> In engineering control systems, in the case where there are several such reference inputs, a 'Controller' is designed to manipulate those inputs so as to obtain the effect on the output of the system that is desired by the system's designer, and the task of a control theory (so conceived) is to calculate those manipulations so as to avoid instability and oscillation. The designer of a PCT model or simulation specifies no particular desired effect on the output of the system, except that it must be whatever is required to bring the input from the environment (the perceptual signal) into conformity with the reference. In Perceptual Control Theory, the input function for the reference signal is a weighted sum of internally generated signals (in the canonical case, higher-level error signals), and loop stability is determined locally for each loop in the manner sketched in the preceding section on the mathematics of PCT (and elaborated more fully in the referenced literature). The weighted sum is understood to result from reorganization.
> Engineering control theory is computationally demanding, but as the preceding section shows, PCT is not. For example, contrast the implementation of a model of an inverted pendulum in engineering control theory [27] with the PCT implementation as a hierarchy of five simple control systems. [28]
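The quoted passage describes the canonical PCT loop: the system acts only to bring its perceptual signal p into conformity with an internal reference r, whatever the external disturbance. A minimal single-loop sketch (all constants illustrative, not from the referenced implementations [27][28]):

```python
# Canonical single PCT control loop: perception p, internal reference r,
# error e = r - p, and an integrating output that acts on the environment.
r = 10.0            # internal reference signal (not externally accessible)
gain, dt = 50.0, 0.01
o = 0.0             # output quantity acting on the environment

for step in range(2000):
    d = 4.0 if step > 1000 else 0.0   # external disturbance appears halfway through
    p = o + d                          # perceptual signal: environment feedback
    e = r - p                          # error signal
    o += gain * e * dt                 # integrating output function

print(f"perception {p:.3f} tracks reference {r} despite disturbance {d}")
```

Note that the loop never computes the disturbance or any desired output; the output simply becomes whatever cancels the disturbance, which is the point the quote makes about PCT's low computational demand.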
The basic idea is that you've got a process with feedback that behaves like a chaotic attractor, moving around a lot but staying in a stable regime.
Where's the edge of that regime?
Here's a video of a leaky bucket waterwheel that exhibits chaotic behavior.[1] If all you had was a graph of rotational velocity, could you tell when it was about to reverse? Probably. Could you train a machine learning system to do that? Yes.
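The leaky-bucket (Malkus) waterwheel is governed by the Lorenz equations, with x playing the role of rotational velocity, so the question "could you tell from the velocity graph when it's about to reverse?" can be posed on a toy Lorenz simulation. The threshold heuristic below is illustrative only, not the article's machine-learning method:

```python
# Lorenz system as a model of the chaotic waterwheel: x ~ rotational velocity,
# and a reversal is x changing sign. Simple forward-Euler integration.
def lorenz(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return (x + sigma * (y - x) * dt,
            y + (x * (rho - z) - y) * dt,
            z + (x * y - beta * z) * dt)

x, y, z = 1.0, 1.0, 1.0
reversals = warned = 0
for _ in range(200_000):
    nx, ny, nz = lorenz(x, y, z)
    # Crude heuristic: a sign flip must pass through near-zero velocity,
    # so a low-amplitude threshold one step ahead flags every reversal.
    # Predicting further ahead is where the machine learning comes in.
    about_to_flip = abs(x) < 2.0
    if nx * x < 0:                     # sign change = wheel reversal
        reversals += 1
        warned += about_to_flip
    x, y, z = nx, ny, nz

print(f"{reversals} reversals, {warned} flagged by the naive heuristic")
```

This only shows that the velocity trace carries the reversal signature at all; anticipating a reversal several swings in advance is the genuinely hard part.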
It's not clear how general a result this is, but undoubtedly someone is already trying it on financial data.
Even from the title alone you can tell this is from Quanta Magazine. What is it about Quanta that produces these pseudoscientific-sounding titles (regardless of content)?