Document Positioning
This document does not explain a specific paper paragraph by paragraph. Instead, it organizes the most crucial mathematical analysis logic from the previous two papers from a methodological perspective. The goal is not only to explain what these two proofs did, but more importantly to help readers build a reusable framework for reading future stability papers.
After reading this, you should ideally be able to answer questions like: What exactly is the Lyapunov method doing? Why can the Lyapunov function be combined with the system equations? Which terms push the system towards instability? Why can control terms suppress this trend? Why can analysis still be performed when the exact expression of a nonlinear term is unknown? What gap does the balancing function actually fill? Why is the Itô formula indispensable in stochastic systems? And, finally, how should one read a stability proof?
Why Stability Proofs Always Involve the Lyapunov Method
The reason the Lyapunov method occupies a central position in stability theory is straightforward. For most complex systems, it is difficult to find the state trajectory explicitly. Once a system involves nonlinearity, stochastic disturbances, time delays, switching, impulses, or network coupling, it is often impractical to determine the state trajectory $x(t)$ in closed form. Yet the stability problem must be answered, hence the need for an analysis method that does not rely on exact solutions.
The Lyapunov method provides precisely such an approach. It does not directly track the state trajectory itself. Instead, it first constructs a function to measure the size of the state, usually denoted $V(x)$. This function can be understood as a kind of generalized "energy." As long as it can be proven that $V(x(t))$ is decreasing overall along the system evolution, it indicates the system state is contracting towards the equilibrium point.
The most basic example is $V(x) = \|x\|^2$, which measures the squared distance from the state to the origin. If $V$ becomes smaller over time, then the state naturally gets closer to the equilibrium point. Thus, the originally hard-to-handle "trajectory problem" is transformed into a more manageable "energy function rate-of-change problem."
Therefore, the true role of the Lyapunov method can be summarized as: Transforming the direct tracking of state trajectories into a judgment of whether an energy function is continuously decreasing.
What is the Exact Relationship Between the Lyapunov Function and the System Equations?
This is the first hurdle in understanding the entire method. Many people feel the Lyapunov function seems like an "extra thing inserted" when reading proofs, but its relationship with the system equations is actually very tight.
The system equations tell you how the state moves. For example, a simple deterministic system is written as $\dot{x}(t) = f(x(t))$.
This equation describes the direction of state change, i.e., where the system will go at this moment.
The Lyapunov function tells you how "large" the state is. For example, $V(x) = x^\top x$.
It maps the state to a scalar, measuring how far it currently is from the equilibrium point.
The truly crucial step is connecting these two. Taking the derivative of $V$ along the system trajectory yields $\dot{V}(x(t)) = \nabla V(x)^\top \dot{x}(t)$.
In the simplest case of $V(x) = x^\top x$, we have $\dot{V} = 2x^\top \dot{x}$.
Substituting the system equation then gives $\dot{V} = 2x^\top f(x)$.
At this point, the system dynamics and the energy rate of change are linked. What we care about thereafter is not just which direction the system is moving, but whether the state size is shrinking or amplifying while it moves.
Thus, the logical relationship here is clear:
The system equations provide the evolution rules. The Lyapunov function provides the state scale. The derivative or generator connects these two.
Most of the proof in a stability paper essentially does this.
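This three-step logic (evolution rule, state scale, derivative along the trajectory) can be checked numerically. A minimal Python sketch, where the system $\dot{x} = -x + 0.1\sin x$, the initial state, and the step size are all purely illustrative choices:

```python
import numpy as np

# Illustrative system: dx/dt = f(x) = -x + 0.1*sin(x) (chosen so that
# x^T f(x) < 0 for x != 0), integrated with forward Euler.
def f(x):
    return -x + 0.1 * np.sin(x)

def V(x):
    return float(x @ x)   # the Lyapunov function V(x) = x^T x

x = np.array([2.0, -1.5])
dt = 1e-3
values = [V(x)]
for _ in range(5000):
    x = x + dt * f(x)     # the system equation moves the state...
    values.append(V(x))   # ...and V is merely evaluated along the way

# Since dV/dt = 2 x^T f(x) < 0 here, the energy shrinks monotonically.
assert all(b <= a for a, b in zip(values, values[1:]))
print(values[0], values[-1])
```

Nothing about the trajectory itself was solved in closed form; only the sign of $2x^\top f(x)$ mattered.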
Why Not Look at the State Directly, But Introduce a Lyapunov Function?
If the system is very simple, of course you can look at the state directly. But the systems in papers are usually not simple. They may be nonlinear, or involve stochastic disturbances, time delays, switching, impulses, and network coupling. In this context, directly finding the state is almost always difficult, and there may not even be an analytical solution to write.
The value of the Lyapunov method lies precisely in bypassing the obstacle of "having to solve for the state trajectory." You don't need to know the exact expression of $x(t)$; you only need to find a sufficiently suitable $V$ and prove that it is decreasing overall along the system trajectory to deduce stability.
Therefore, the enduring popularity of the Lyapunov method is not due to its elegant form, but because it is particularly suited for handling systems that are "difficult to solve for trajectories, structurally complex, yet still require stability judgment."
Why Do Some Terms Make the Lyapunov Derivative Positive?
This question superficially asks about signs, but fundamentally asks: Which mechanisms are injecting energy into the system?
First, consider the simplest case. If we take $V(x) = x^\top x$, then $\dot{V} = 2x^\top \dot{x}$.
If the system is written as a sum of mechanisms, say $\dot{x} = f(x) + g(x)$, then $\dot{V} = 2x^\top f(x) + 2x^\top g(x)$.
Here, any term that pushes the state further away from the origin may manifest as a positive contribution in the Lyapunov derivative. That is, as long as a mechanism makes the system more likely to "move outward," it will usually increase $\dot{V}$, even making it positive.
Several typical sources of growth are common.
The Intrinsic Dynamics May Have a Natural Growth Trend
If a segment of the dynamics satisfies $x^\top f(x) > 0$, it means the system itself is pushing the state outward from the equilibrium point. In this case, even without attacks or noise, the system may not be naturally stable.
Coupling Terms May Amplify Local Errors
In networked systems, fluctuations at one node may propagate to other nodes through coupling relationships. Thus, a local deviation can be amplified by the network structure, ultimately appearing as a positive growth term in the Lyapunov estimate.
Stochastic Disturbances Increase State Fluctuations
In stochastic systems, noise does not simply average out like ordinary terms; it typically enters the generator in squared form. This means stochastic terms tend to raise the system's average fluctuation level, thus also appearing as a source of growth.
Time Delays Weaken the Contraction Effect of Current Control
When the current system evolution is influenced by past states, there is a lag between control action and state change, increasing the possibility of oscillation and amplification. In the Lyapunov derivative, delay terms often introduce troublesome cross-terms and additional bounds.
Destabilizing Impulses Instantly Amplify the State
This is typical in the second paper. If an impulse makes the state jump instantly from $x(t_k^-)$ to $x(t_k^+) = \mu_k x(t_k^-)$ with $|\mu_k| > 1$, it is clearly "adding energy" to the system. Such impulses appear as growth terms in the Lyapunov jump estimate.
Therefore, why a term makes the derivative positive fundamentally depends on whether it pushes the system away from the equilibrium point. When reading a proof, what you really need to look at is not the name of the term, but whether it ultimately corresponds to "contraction" or "amplification" in the inequality.
Why Can Control Terms Push the Derivative Back to Negative Values?
Control terms can stabilize the system because they are typically designed as negative feedback structures with dissipative effects. The simplest example is $u = -Kx$.
Substituting into $\dot{x} = f(x) + u$ gives $\dot{V} = 2x^\top f(x) - 2x^\top Kx$, where $-2x^\top Kx$ is a clear negative term. If the gain matrix $K$ is chosen appropriately, this negative term can overpower the growth term from the system's intrinsic dynamics, causing the overall derivative to turn negative again.
From an energy perspective, the role of control is to artificially add a "dissipation mechanism" to the system. The system might originally accumulate energy due to its intrinsic dynamics, coupling, delays, or noise; the control term tries to pull this energy down.
The continuous feedback term in the second paper is a clear example: schematically, it acts like $u(t) = -c\,x(t)$ with gain $c > 0$.
If we take a Lyapunov function like $V = x^\top x$, this part contributes $-2c\,x^\top x = -2cV$.
Since $c > 0$, it corresponds to a negative dissipation term. Precisely because of this, once a DoS attack forces the feedback to zero, this dissipation disappears, and system stability is immediately weakened.
Thus, the essential role of control terms can be summarized as: Creating a sufficiently strong dissipation structure through negative feedback to pull the system back from a growth trend to a contraction trend.
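A toy numerical illustration of this dissipation mechanism; the matrices $A$ and $K$ below are arbitrary illustrative choices, not taken from either paper:

```python
import numpy as np

# Open-loop unstable linear system dx/dt = A x + u with V(x) = x^T x.
# With u = -K x, dV/dt = 2 x^T (A - K) x, and a strong enough K makes
# the quadratic form negative definite.
A = np.array([[0.5, 1.0],
              [0.0, 0.3]])   # eigenvalues 0.5 and 0.3: energy grows
K = 2.0 * np.eye(2)          # simple illustrative feedback gain

def V(x):
    return float(x @ x)

def simulate(closed_loop, steps=4000, dt=1e-3):
    x = np.array([1.0, 1.0])
    for _ in range(steps):
        u = -K @ x if closed_loop else np.zeros(2)
        x = x + dt * (A @ x + u)   # forward Euler step
    return V(x)

v_open, v_closed = simulate(False), simulate(True)
print(v_open, v_closed)   # energy grows without feedback, decays with it
```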
Why Do Proofs Always Look for the "Dominant Negative Term"?
Many stability proofs ultimately organize into $\dot{V} \le -\alpha V + (\text{other terms})$, or in stochastic systems, $\mathcal{L}V \le -\alpha V + (\text{other terms})$.
Here, $-\alpha V$ is the so-called dominant negative term. It represents a clear overall contraction trend in the system. The remaining terms represent additional influences from coupling, delays, noise, attacks, errors, or impulses.
Why must proofs always seek this structure? Because as long as it can finally be shown that the dominant negative term is strong enough to suppress all other growth and residual terms, it can be concluded that the system overall still moves towards stability.
Therefore, many stability proofs may seem like a bunch of inequality manipulations, but they actually revolve around the same goal: Finding and preserving an overall negative energy change structure.
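The force of this structure can be seen in a small numeric check of the comparison argument; the values of $\alpha$, $\beta$, and the initial condition below are arbitrary:

```python
import numpy as np

# If the estimate dV/dt <= -alpha*V + beta holds, the comparison bound
# V(t) <= exp(-alpha*t)*V(0) + beta/alpha follows: the dominant term
# -alpha*V forces decay, while the residual beta only shifts the limit.
alpha, beta = 2.0, 0.5
dt, steps = 1e-3, 5000
v0 = 3.0
v = v0
for k in range(1, steps + 1):
    v = v + dt * (-alpha * v + beta)   # worst case: equality in the estimate
    t = k * dt
    assert v <= np.exp(-alpha * t) * v0 + beta / alpha + 1e-9
print(v)   # settles near beta/alpha = 0.25, far below the initial 3.0
```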
Why Can Analysis Still Be Done Without Knowing the Exact Form of Nonlinear Terms?
This is where beginners are most easily confused. Many nonlinear terms in papers, written generically as $f(\cdot)$, $g(\cdot)$, $h(\cdot)$, are not given exact explicit formulas, so why can the authors continue the derivation?
The key reason is that stability analysis usually does not need to know "what it looks like," but only needs to know "how bad it can be at most." In other words, the analysis relies on bound conditions, not analytical expressions.
For example, a paper might give assumptions like $\|f(x) - f(y)\| \le L\|x - y\|$ (a Lipschitz condition) or $\|f(x)\| \le L\|x\|$ (a linear growth condition).
Such conditions indicate that although we don't know the exact form of the function, we know it won't grow faster than some linear bound. This way, in the Lyapunov derivative estimate, we can write $2x^\top f(x) \le 2L\|x\|^2$.
Thus, the originally seemingly uncontrollable unknown function is replaced by a bound term comparable to $V(x)$ itself.
Therefore, in control theory, the goal is often not to know the complete closed form of the system, but to know what type of growth constraints it satisfies. What's truly important is not "what function it is," but: Can its growth rate be bounded into an estimable upper bound?
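As a concrete sanity check of this idea, take $f(x) = \sin(x)$ elementwise and pretend we know nothing about it beyond the bound $\|f(x)\| \le L\|x\|$ with $L = 1$. A short sketch with arbitrary random samples:

```python
import numpy as np

# We only use the growth bound ||f(x)|| <= L*||x|| (L = 1 for sin),
# never sin's explicit form; the cross term in the Lyapunov
# derivative is then bounded by 2*L*V(x).
rng = np.random.default_rng(0)
L = 1.0
worst = -np.inf
for _ in range(1000):
    x = rng.normal(size=3) * 5.0
    fx = np.sin(x)
    # both quantities are <= 0 exactly when the bounds hold
    worst = max(worst,
                np.linalg.norm(fx) - L * np.linalg.norm(x),
                2 * (x @ fx) - 2 * L * (x @ x))
print(worst)   # no sample violates either bound
```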
Where Do Those Bounds and Inequalities Actually Come From?
A common frustration when reading proofs is feeling that the author is "writing inequalities out of thin air." Actually, these estimates usually come from several very clear sources.
Some Come from the System Equations Themselves
The system dynamics already tell you which terms compose the state change, e.g., $\dot{x} = f(x) + u$, which directly determines what objects will appear in the derivative.
Some Come from Modeling Assumptions
Many bounds are not derived; they are structural assumptions the author makes about the subject from the start. For example, Lipschitz conditions, linear growth conditions, bounded delay conditions, coupling matrix properties, etc. These conditions limit the system from becoming so bad that analysis becomes impossible.
Some Come from Standard Algebraic Inequalities
These are the most common technical tools in stability proofs. For instance, $x^\top y \le \|x\|\,\|y\|$ comes from the Cauchy-Schwarz inequality. Another example, $2ab \le \varepsilon a^2 + \varepsilon^{-1} b^2$ for any $\varepsilon > 0$, comes from Young's inequality. It is especially useful because it can split a troublesome cross-term into two squared terms, which are usually easier to match with the Lyapunov function.
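Both inequalities are easy to spot-check numerically; a quick sketch with random vectors (the dimension and the choice $\varepsilon = 0.5$ are arbitrary):

```python
import numpy as np

# Spot-check Cauchy-Schwarz (x^T y <= ||x||*||y||) and Young's
# inequality (2ab <= eps*a^2 + b^2/eps) on random samples.
rng = np.random.default_rng(1)
violations = 0
for _ in range(1000):
    x, y = rng.normal(size=4), rng.normal(size=4)
    if x @ y > np.linalg.norm(x) * np.linalg.norm(y) + 1e-12:
        violations += 1
    a, b, eps = rng.normal(), rng.normal(), 0.5
    if 2 * a * b > eps * a**2 + b**2 / eps + 1e-12:
        violations += 1
print(violations)   # 0: both inequalities hold for every sample
```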
Others Come from Stochastic Analysis Tools
In stochastic systems, ordinary derivatives are insufficient; the Itô formula and generator must be used. These tools automatically introduce noise squared terms, so the estimation structure in stochastic systems differs significantly from deterministic ones.
Thus, the inequalities in papers are usually not mysterious. You can understand them as a three-step combination: first find terms from the system equations, then use modeling assumptions to bound the growth of these terms, and finally use standard inequalities to compress them into a uniformly comparable form.
Why is the Itô Formula So Important in Stochastic Systems?
Once a system contains Brownian motion terms, ordinary differentiation rules are no longer sufficient. Stochastic systems are often written as $dx(t) = f(x(t))\,dt + g(x(t))\,dW(t)$.
Here, state change includes not only the deterministic drift term $f(x)\,dt$ but also the stochastic diffusion term $g(x)\,dW(t)$. If we still directly calculated $\dot{V}$ as in deterministic systems, the crucial noise contribution would be missed.
The role of the Itô formula is to tell you how the Lyapunov function changes along a stochastic trajectory. Taking the simplest $V(x) = x^\top x$ as an example, the generator typically appears as something like $\mathcal{L}V = 2x^\top f(x) + \operatorname{tr}\big(g(x)^\top g(x)\big)$.
The extra noise squared term appearing here is an important distinction between stochastic and deterministic systems. It shows that noise does not "average out to nothing"; it genuinely enters the energy rate of change with squared intensity.
Therefore, the Itô formula is important because it gives you the correct way to calculate the Lyapunov rate of change in stochastic systems. Without it, many stability conclusions cannot be written rigorously.
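This noise-squared effect is easy to see in simulation. A sketch with the scalar SDE $dx = -a x\,dt + \sigma\,dW$ (the values of $a$ and $\sigma$ are illustrative): the generator of $V = x^2$ is $\mathcal{L}V = -2a x^2 + \sigma^2$, so setting $\mathbb{E}[\mathcal{L}V] = 0$ predicts the stationary level $\mathbb{E}[x^2] = \sigma^2/(2a)$, whereas ignoring the noise term would wrongly predict decay to zero.

```python
import numpy as np

# Euler-Maruyama simulation of dx = -a*x dt + sigma dW over many paths.
# The empirical stationary E[x^2] should match sigma^2/(2a) = 0.125,
# which is exactly the contribution of the noise-squared term.
rng = np.random.default_rng(2)
a, sigma = 1.0, 0.5
dt, steps, paths = 1e-3, 5000, 10000
x = np.zeros(paths)
for _ in range(steps):
    dW = rng.normal(scale=np.sqrt(dt), size=paths)
    x = x + dt * (-a * x) + sigma * dW
mean_V = float(np.mean(x**2))
print(mean_V)   # close to 0.125, not 0
```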
How Exactly is the Lyapunov Function Chosen?
The Lyapunov function is not mechanically calculated from the system equations; it is designed based on the system structure and proof objectives. You can think of it as a selection problem for an analysis tool.
The most basic starting point is usually $V(x) = x^\top x$, as it most naturally measures state size. But once the system structure becomes complex, a plain quadratic $V$ is often insufficient.
The first paper involves time delays, sampling errors, mode mismatches, attack intervals, and switching behavior. Using only $x^\top x$, the effects of these factors are difficult to fully absorb, so more complex Lyapunov functionals need to be constructed, incorporating past states, integral terms, or error terms.
The second paper, while not having discrete observation errors like the first, has impulse moments, non-impulse intervals, stabilizing impulses, destabilizing impulses, attack intervals, and node asynchronous behavior. Here, relying solely on a static Lyapunov function is also insufficient, so balancing functions are used to characterize the stabilizing and destabilizing effects across different intervals on the time axis.
Thus, the real criterion for choosing a Lyapunov function is not just "looking like energy," but: After substituting into the system equations, its rate of change must be effectively estimable and ultimately organized into an overall contraction structure.
Why is the Balancing Function Useful?
The significance of the balancing function becomes particularly clear only when the system simultaneously contains many mechanisms acting in different directions.
If a system is always in the same mode, control is always effective, and all impulses act in a stabilizing direction, then Lyapunov analysis might only need to focus on a uniform decay inequality. But the two papers deal with systems where multiple intervals, multiple events, and multiple mechanisms interweave.
In the first paper, one must simultaneously consider attack and non-attack intervals, mode-matched and mode-mismatched intervals, and node asynchronous switching behavior. In the second paper, one must simultaneously consider stabilizing and destabilizing impulses, impulse moments and non-impulse intervals, attack and non-attack intervals, and asynchronous impulse behavior of different nodes.
In this context, a fixed decay rate can no longer accurately describe the net effect of the system. The role of the balancing function is to uniformly record these beneficial and adverse effects scattered across different time periods. You can think of it as a dynamic ledger: it encodes which time periods are "profitable" and which are "losing."
Finally, if it can be proven that the overall ledger still continuously moves in a direction favorable for decay, then the system remains stable. Precisely because of this, the balancing function in the two papers is not a dispensable decoration but a key bridge piecing together local fragmented analyses into a global stability conclusion.
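A toy version of this ledger view can be written down directly; all rates, interval lengths, and the impulse gain below are invented for illustration, and the papers' balancing functions are considerably more refined:

```python
import numpy as np

# Controlled intervals earn decay exp(-alpha*tau); DoS intervals lose
# exp(+beta*tau); a destabilizing impulse multiplies V by mu > 1.
# Stability over the whole timeline is a statement about the running
# sum of these logarithmic credits and debits.
alpha, beta, mu = 1.0, 0.4, 1.2
timeline = [("ctrl", 1.0), ("dos", 0.3), ("impulse", None),
            ("ctrl", 1.5), ("dos", 0.2), ("ctrl", 2.0)]

log_V = 0.0   # log of V(t)/V(0)
for kind, tau in timeline:
    if kind == "ctrl":
        log_V -= alpha * tau       # credit: dissipation
    elif kind == "dos":
        log_V += beta * tau        # debit: feedback lost
    else:
        log_V += np.log(mu)        # debit: destabilizing jump
print(log_V)   # negative: net contraction despite attacks and impulses
```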
What Gap Does the Graph Theory Method Fill Here?
The two papers do not deal with single-node systems but with networked large-scale systems. There are coupling terms between nodes, e.g., of the form $\sum_j a_{ij}(x_j - x_i)$ or $\sum_j b_{ij}\,h(x_j)$.
When performing single-node analysis only, these terms appear as cross-influences from other nodes on the current node. The problem is that single-node estimates cannot automatically lead to the stability of the entire network, because network topology can allow errors to propagate, superimpose, or even amplify between nodes.
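The spectral facts that make this local-to-global lift possible are concrete. A small sketch with an arbitrary 4-node undirected graph (a triangle plus one pendant node):

```python
import numpy as np

# Laplacian L = D - A of a small undirected graph.  L*1 = 0 always,
# and for a connected graph the second-smallest eigenvalue (the
# algebraic connectivity) is strictly positive; it quantifies how
# strongly diffusive coupling pulls node states together.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Lap = np.diag(A.sum(axis=1)) - A
eig = np.sort(np.linalg.eigvalsh(Lap))

assert np.allclose(Lap @ np.ones(4), 0)   # constant vector lies in the kernel
print(eig)   # smallest eigenvalue is 0; second-smallest > 0 (connected graph)
```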
The graph theory method fills this leap from local nodes to the global network. Using graph structure, Laplacian matrices, co