A New Set of Stability Criteria Extending Lyapunov’s Direct Method

A dynamical system is a mathematical model, described by an ordinary differential equation, for a wide variety of real-world phenomena, which can be as simple as a clock pendulum or as complex as the chaotic Lorenz system. Stability is an important topic in the study of dynamical systems. A major challenge is that the analytical solution of a time-varying nonlinear dynamical system is in general not known. Lyapunov's direct method is a classical approach used for many decades to study stability without explicitly solving the dynamical system, and has been successfully employed in numerous applications ranging from aerospace guidance systems and chaos theory to traffic assignment. Roughly speaking, an equilibrium is stable if an energy function monotonically decreases along the trajectory of the dynamical system. This paper extends Lyapunov's direct method by allowing the energy function to follow a richer set of dynamics. More precisely, the paper proves two theorems, one on global uniform asymptotic stability and the other on stability in the sense of Lyapunov, where stability is guaranteed provided that the evolution of the energy function satisfies an inequality of a non-negative Hurwitz polynomial differential operator, which uses not only the first-order but also higher-order time derivatives of the energy function. The classical Lyapunov theorems are special cases of the extended theorems. The paper provides an example in which the new theorem successfully determines stability while the classical Lyapunov's direct method fails.


Introduction
Consider a continuous-time dynamical system
ẋ = f(x, t), (1)
where x ∈ R^n is a point in an n-dimensional space and t is a one-dimensional variable representing time. x(t) is the state of the dynamical system at time t and traces the trajectory of the point as time passes. The initial state is given as x(t_0) = x_0. If f(x, t) is only a function of x, the system is called time-invariant; otherwise, it is time-varying. Furthermore, if f(x, t) = Ax for some square matrix A, it is linear time-invariant. The dynamical system is a mathematical model widely used in many disciplines including engineering, physics, economics and biology.
A point x* is an equilibrium point of (1) if f(x*, t) = 0 for any t ≥ 0. If the initial state x_0 = x*, then the state will remain at x* forever. The stability of the equilibrium point characterizes whether the state will return to x* after a certain perturbation away from it or diverge. Stability is an important topic in the study of dynamical systems. The stability criteria for a linear time-invariant system have been well developed. However, it is difficult to examine the stability of a nonlinear or time-varying system, because the analytical solution of such a system is in general not known.
Lyapunov's direct method, which was founded in A. M. Lyapunov's thesis The General Problem of Stability of Motion at Moscow University in 1892 (Wikipedia, 2018), has been a widely used approach to study the stability of the dynamical system (Parks, 1992). Rather than solving (1) analytically, the method employs a scalar positive definite function V(x, t), intuitively representing the energy of the state, where V(x*, t) = 0 and V(x, t) > 0 for any x ≠ x*. V̇(x, t) is the time derivative of V(x, t) along the trajectory. If V̇(x, t) < 0 for any x except at x*, then the energy decreases monotonically over time and the trajectory converges to x*. Lyapunov's direct method has been employed in numerous applications ranging from aerospace guidance systems and chaos theory to traffic assignment (Wikipedia, 2018).
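As a concrete illustration (not from the paper), the following sketch applies the direct method numerically to the scalar system ẋ = −x³ with the candidate energy V(x) = x². Along trajectories V̇ = 2x·(−x³) = −2x⁴ < 0 for x ≠ 0, so the equilibrium x* = 0 is asymptotically stable; the code checks the monotone decrease without solving the ODE in closed form.

```python
# Illustrative sketch (not from the paper): Lyapunov's direct method for the
# scalar system x' = -x**3 with candidate energy V(x) = x**2. Along any
# trajectory, dV/dt = 2*x*(-x**3) = -2*x**4 < 0 for x != 0, so V decreases
# monotonically and the equilibrium x* = 0 is asymptotically stable.

def f(x):
    return -x ** 3          # right-hand side of the dynamical system

def V(x):
    return x * x            # candidate energy function

def energy_along_trajectory(x0, dt=1e-3, t_end=10.0):
    """Forward-Euler integration; returns V sampled along the trajectory."""
    x, vals = x0, []
    for _ in range(int(t_end / dt)):
        vals.append(V(x))
        x += dt * f(x)
    return vals

vals = energy_along_trajectory(2.0)
```

Numerically, `vals` is non-increasing and decays toward 0, matching the prediction of the direct method.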
It should be pointed out that Lyapunov's direct method is sufficient but not necessary. Intuitively, for example, the dynamical system is still stable even if the energy does not monotonically decrease, as long as it eventually converges to 0. This paper develops this idea rigorously and proposes a new set of stability criteria, which are more relaxed than the conventional Lyapunov stability criteria. Specifically, the energy function is still employed but does not have to be monotonically decreasing. Instead, the evolution of the energy must satisfy an inequality of a Hurwitz polynomial differential operator defined in the paper, which uses not only the first-order but also higher-order time derivatives of V(x, t).
The remainder of this paper is organized as follows. Section 2 introduces two commonly used definitions of stability, namely stability in the sense of Lyapunov and asymptotic stability, and reviews Lyapunov's direct method. Section 3 introduces the notion of a Hurwitz polynomial and proposes a definition of the non-negative Hurwitz polynomial differential operator. Section 4 presents the main results of this paper, i.e., the new set of stability criteria extending Lyapunov's direct method. Section 5 shows an example in which the new criteria can determine stability while the conventional Lyapunov stability criteria cannot. The paper is concluded in Section 6.

Definitions of Stability and Lyapunov's Direct Method
To simplify the description, shift the origin of the system by −x* so that the equilibrium point is at x* = 0. The following two definitions of stability are commonly used (Teschl, 2012).

Definition 2.1 Stability in the sense of Lyapunov
The equilibrium point x* = 0 is stable in the sense of Lyapunov at t = t_0 if for any ε > 0 there exists a δ(t_0, ε) > 0 such that ‖x_0‖ < δ(t_0, ε) implies ‖x(t)‖ < ε for all t ≥ t_0. Furthermore, x* is uniformly stable if it is stable in the sense of Lyapunov and δ does not depend on t_0.

Definition 2.2 Asymptotic stability
The equilibrium point x* = 0 is asymptotically stable at t = t_0 if it is stable in the sense of Lyapunov and there exists a ∆(t_0) > 0 such that if ‖x_0‖ < ∆(t_0) then lim_{t→∞} x(t) = 0. Furthermore, x* is uniformly asymptotically stable if it is uniformly stable and ∆ is not a function of t_0. x* is globally asymptotically stable if lim_{t→∞} x(t) = 0 for x_0 anywhere in R^n.

Lyapunov's direct method can be used to determine the asymptotic stability of the dynamical system without analytically solving the differential equation (1), as stated in the following theorem (Narendra & Annaswamy, 1989).
Theorem 2.3 Lyapunov theorem for global uniform asymptotic stability
The dynamical system (1) is globally uniformly asymptotically stable if a scalar function V(x, t) with continuous partial derivatives with respect to x, t exists and if the following conditions are satisfied:
1. There exist continuous non-descending functions α(‖x‖) and β(‖x‖), positive for ‖x‖ > 0 with α(0) = β(0) = 0 and α(‖x‖) → ∞ as ‖x‖ → ∞, such that ∀t ≥ t_0, ‖x‖ > 0,
α(‖x‖) ≤ V(x, t) ≤ β(‖x‖). (2)
2. There exists a continuous non-descending function W(‖x‖) such that V̇(x, t) along the trajectory satisfies
V̇(x, t) ≤ −W(‖x‖), (5)
where
W(‖x‖) > 0, ∀‖x‖ > 0, and W(0) = 0. (6)
Here the time derivative along the trajectory is given by
V̇(x, t) = ∂V(x, t)/∂t + (∂V(x, t)/∂x) f(x, t). (7)
The above two sets of conditions are illustrated in Figure 2.
Similarly the following theorem (Murray, Li, & Sastry, 1994) states Lyapunov's direct method to determine the stability in the sense of Lyapunov.
Theorem 2.4 Lyapunov theorem for uniform stability in the sense of Lyapunov
The dynamical system (1) is uniformly stable in the sense of Lyapunov if a scalar function V(x, t) with continuous partial derivatives with respect to x, t exists and if the following conditions are satisfied for ‖x‖ < Ω, a neighborhood of the equilibrium point:
1. There exist continuous non-descending functions α(‖x‖) and β(‖x‖), positive for ‖x‖ > 0 with α(0) = β(0) = 0, such that ∀t ≥ t_0,
α(‖x‖) ≤ V(x, t) ≤ β(‖x‖). (8)
2. V̇(x, t) along the trajectory is bounded above by −W(‖x‖),
V̇(x, t) ≤ −W(‖x‖), (11)
where W(‖x‖) is a continuous function with
W(‖x‖) ≥ 0. (12)
The main difference between the two theorems lies in (6) and (12). To ensure asymptotic stability, W(‖x‖) is strictly positive except at ‖x‖ = 0. For stability in the sense of Lyapunov, W(‖x‖) only needs to be non-negative. In addition, conditions (2), (5) and (6) are satisfied globally for x ∈ R^n, while (8), (11) and (12) are satisfied locally near the equilibrium point, ‖x‖ < Ω. Because x(t) is a function of t, V(x, t) along the trajectory is a function of only t. For example, for asymptotic stability, if conditions (2) to (6) are met, V(x(t), t) monotonically decreases to 0 because the time derivative is strictly negative unless x(t) = 0. The time evolution of V(x(t), t) in the two stability theorems of Lyapunov is illustrated in Figure 3. Note that monotonic decrease in V(x(t), t) is sufficient but not necessary to ensure the convergence of V(x(t), t) to 0 or to some constant. Figure 3(b) shows two examples. In one example, V(x(t), t) goes up and down and converges to 0; in the other, V(x(t), t) does not converge to any constant although it does not diverge either. The main contribution of the paper is to characterize such dynamics of the energy function V(x(t), t). To this end, the next section defines a non-negative Hurwitz polynomial differential operator.
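The non-monotone behavior sketched in Figure 3(b) is easy to reproduce numerically. In the following sketch (illustrative, not from the paper), a damped oscillator is simulated and the function V = x₁² (position squared, deliberately not the full mechanical energy) is tracked along the trajectory; V repeatedly rises and falls yet still converges to 0.

```python
# Illustrative sketch (not from the paper): for the damped oscillator
#   x1' = x2,  x2' = -x1 - 0.2*x2,
# the function V = x1**2 (position squared, not the full energy) is NOT
# monotonically decreasing along the trajectory -- it oscillates -- yet it
# still converges to 0, exactly the situation targeted by the extended criteria.

def track_V(x1, x2, dt=1e-3, t_end=60.0):
    """Forward-Euler simulation; returns V = x1**2 sampled along the trajectory."""
    vals = []
    for _ in range(int(t_end / dt)):
        vals.append(x1 * x1)
        # tuple assignment: the right-hand side uses the pre-update state
        x1, x2 = x1 + dt * x2, x2 + dt * (-x1 - 0.2 * x2)
    return vals

vals_osc = track_V(1.0, 0.0)
increases = any(b > a for a, b in zip(vals_osc, vals_osc[1:]))
```

Here `increases` is True (V is not monotone), while the tail of `vals_osc` still decays to 0.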

Non-negative Hurwitz Polynomial Differential Operator
Denote the differential operator D = d/dt. An m-th degree polynomial differential operator is given by
P(D) = D^m + a_1 D^{m−1} + · · · + a_{m−1} D + a_m,
where the constant coefficients a_1, . . . , a_m are real. In the special case of m = 0, P(D) = a_0 = 1. The corresponding polynomial is
P(s) = s^m + a_1 s^{m−1} + · · · + a_{m−1} s + a_m.
Denote λ_1, . . . , λ_l the complex roots of polynomial P(s) and n_1, . . . , n_l the corresponding multiplicities, where Σ_{j=1}^{l} n_j = m and l is the number of distinct complex roots.
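Whether P(s) has all of its roots in the open left half-plane can be tested from the coefficients alone via the classical Routh–Hurwitz criterion, without computing the roots. The sketch below is illustrative (the paper itself prescribes no algorithm): it builds the Routh array and checks its first column, treating any zero pivot as a failure, which suffices for strict Hurwitz testing.

```python
def is_hurwitz(coeffs):
    """Routh-Hurwitz test for P(s) = s^m + a1*s^(m-1) + ... + am.

    coeffs = [1, a1, ..., am]. Returns True iff all roots of P(s) have
    strictly negative real parts. A zero pivot (roots on the imaginary
    axis) is reported as not Hurwitz, which suffices for this sketch.
    """
    n = len(coeffs) - 1                      # degree m
    if n == 0:
        return True                          # P(D) = 1 has no roots
    if any(c <= 0 for c in coeffs):
        return False                         # necessary: all coefficients > 0
    r1 = list(map(float, coeffs[0::2]))      # first two rows of the Routh array
    r2 = list(map(float, coeffs[1::2]))
    width = len(r1)
    r2 += [0.0] * (width - len(r2))
    for _ in range(n - 1):
        if r2[0] == 0.0:
            return False                     # zero pivot
        r3 = [(r2[0] * r1[i + 1] - r1[0] * r2[i + 1]) / r2[0]
              for i in range(width - 1)] + [0.0]
        if r3[0] < 0.0:
            return False                     # sign change in the first column
        r1, r2 = r2, r3
    return r2[0] > 0.0
```

For example, `is_hurwitz([1, 3, 3, 1])` is True ((s + 1)³), while `is_hurwitz([1, 1, 2, 8])` is False: s³ + s² + 2s + 8 has a right-half-plane pair even though every coefficient is positive.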
A time-invariant linear ordinary differential equation
d^m y(t)/dt^m + a_1 d^{m−1} y(t)/dt^{m−1} + · · · + a_m y(t) = z(t) (15)
can be written in the form of a polynomial differential operator,
P(D)[y(t)] = z(t). (16)
This linear differential equation is said to be defined by P(D). y(t) and z(t) are called the output and input of the corresponding linear system, respectively, as shown in Figure 4, which illustrates the input and output of the linear system corresponding to the linear differential equation.
The solution of (15) or (16) with input z(t) and initial conditions y(0), dy(0)/dt, . . ., d^{m−1}y(0)/dt^{m−1} is given by (Lathi, 2005)
y(t) = y_1(t) + y_2(t), (17)
where y_1(t) is the zero-state response and y_2(t) is the zero-input response given by
y_2(t) = Σ_{j=1}^{l} Σ_{k=0}^{n_j−1} c_{k,j} t^k e^{λ_j t}. (18)
For the unit step input function
z(t) = 1, t ≥ 0, (19)
y_1(t) is called the zero-state unit step response and given by
y_1(t) = 1/a_m + Σ_{j=1}^{l} Σ_{k=0}^{n_j−1} d_{k,j} t^k e^{λ_j t}. (20)
Constant coefficients d_{k,j} are completely determined by P(D). c_{k,j} is a function of both P(D) and the initial conditions y(0), dy(0)/dt, . . ., d^{m−1}y(0)/dt^{m−1}. When all λ_j have negative real parts,
lim_{t→∞} y_1(t) = 1/a_m. (21)
The notion of the zero-state unit step response is meaningful only when a_m ≠ 0, which holds for any Hurwitz polynomial differential operator P(D) defined next.
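The decomposition (17) into zero-state and zero-input parts can be sanity-checked numerically. The sketch below (illustrative, not from the paper) integrates the first-order case dy/dt + a₁y = z with a unit step input and a nonzero initial condition, and compares the result at t = 5 with the sum of the closed-form zero-state response (1 − e^{−a₁t})/a₁ and zero-input response y(0)e^{−a₁t}.

```python
import math

# Illustrative check of y(t) = y1(t) + y2(t) for dy/dt + a1*y = z(t) with a
# unit step input z(t) = 1 (t >= 0) and initial condition y(0) = y0.
# Closed forms: zero-state y1(t) = (1 - exp(-a1*t))/a1,
#               zero-input y2(t) = y0*exp(-a1*t).

def integrate_step(a1, y0, dt=1e-4, t_end=5.0):
    """Forward-Euler integration of dy/dt = 1 - a1*y from y(0) = y0."""
    y = y0
    for _ in range(int(t_end / dt)):
        y += dt * (1.0 - a1 * y)
    return y

a1, y0, T = 1.0, 2.0, 5.0
full = integrate_step(a1, y0, t_end=T)      # full response at t = T
y1_T = (1.0 - math.exp(-a1 * T)) / a1       # zero-state response at t = T
y2_T = y0 * math.exp(-a1 * T)               # zero-input response at t = T
```

The simulated full response agrees with y₁(T) + y₂(T) to within the Euler discretization error.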
Definition 3.1 Hurwitz polynomial and Hurwitz polynomial differential operator (Kuo, 1966) Polynomial P (s) is said to be Hurwitz if the roots of P (s), λ 1 , . . . , λ l , all have negative real parts. Polynomial differential operator P (D) is Hurwitz if the corresponding polynomial P (s) is Hurwitz.
If P(D) is Hurwitz, then a_m ≠ 0, because otherwise P(s) would have a root at λ = 0.
Lemma 3.2 The integral ∫_{t_0}^{t} y_1(τ) dτ goes to infinity as t → ∞ for any fixed t_0.
Proof This follows immediately from (21): y_1(τ) approaches 1/a_m > 0 as τ → ∞, so the integral grows without bound.
I next define a new type of Hurwitz polynomial differential operator, which has not been studied in the literature so far, but will be used in Section 4.

Definition 3.3 Non-negative Hurwitz polynomial differential operator
Hurwitz polynomial differential operator P(D) is non-negative if the zero-state unit step response defined in (20) satisfies y_1(t) ≥ 0, ∀t ≥ 0.
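Non-negativity of a given P(D) can be probed numerically by simulating its step response. The sketch below (illustrative; the coefficients are my own choice, not from the paper) integrates ÿ + 0.2ẏ + 1.01y = 1 from zero initial conditions. The roots −0.1 ± i are only lightly damped, yet for this second-degree Hurwitz P(D) the response never goes below zero, consistent with the m ≤ 3 cases discussed next.

```python
def step_response(a1, a2, dt=1e-3, t_end=80.0):
    """Zero-state unit step response of y'' + a1*y' + a2*y = 1 (forward Euler).

    The coefficients used below are an illustrative choice, not from the paper.
    """
    y, yd, ys = 0.0, 0.0, []
    for _ in range(int(t_end / dt)):
        ys.append(y)
        # tuple assignment: the right-hand side uses the pre-update state
        y, yd = y + dt * yd, yd + dt * (1.0 - a1 * yd - a2 * y)
    return ys

# Lightly damped complex roots -0.1 +/- i  =>  a1 = 0.2, a2 = 0.01 + 1 = 1.01.
ys2 = step_response(0.2, 1.01)
```

`min(ys2)` never drops below zero and the response settles at 1/a₂ ≈ 0.990, so this P(D) is (numerically) non-negative; the degree-4 example discussed with Figure 5 shows the same check can fail for higher degrees.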
Lemma 3.4 Any Hurwitz polynomial differential operator P(D) of degree m = 0, 1, 2, 3 is non-negative.

Proof Solve the m coefficients d_{k,j} of the zero-state unit step response (20) by setting the initial conditions to zero. For m = 0, (15) is reduced to y(t) = z(t). Thus y_1(t) = 1 and P(D) is non-negative.
For m = 1, dy(t)/dt + a_1 y(t) = z(t). For the differential operator to be Hurwitz, a_1 > 0. The zero-state unit step response is
y_1(t) = (1 − e^{−a_1 t})/a_1 ≥ 0, ∀t ≥ 0.
For m = 2, d²y(t)/dt² + a_1 dy(t)/dt + a_2 y(t) = z(t). For the differential operator to be Hurwitz, a_1 > 0 and a_2 > 0. There are three cases of the roots of P(s).
The minimum value of y_1(t) can be obtained by setting ẏ_1(t) = 0. It can be verified by algebra that the minimum value is non-negative in all of the three cases. Therefore, y_1(t) ≥ 0, ∀t ≥ 0. For m = 3, d³y(t)/dt³ + a_1 d²y(t)/dt² + a_2 dy(t)/dt + a_3 y(t) = z(t). For the differential operator to be Hurwitz, a_3 > 0. There are four cases of the roots of P(s).
Similarly to the case of m = 2, y 1 (t) ≥ 0, ∀t ≥ 0, in all of the four cases.
It should be pointed out that not all Hurwitz polynomial differential operators are non-negative. Figure 5 shows the zero-state unit step response y_1(t) of m = 4 where polynomial P(s) has a pair of complex roots σ_1 ± ω_1 i = −0.1 ± i√(1 − 0.1²), each with multiplicity 2. It is clear that while y_1(t) converges, y_1(t) < 0 for some t. This is an example where P(D) is not non-negative. As the damping factor increases, the polynomial differential operator can be made non-negative. Figure 5 shows that y_1(t) > 0, ∀t, when the complex roots become σ_2 ± ω_2 i = −0.5 ± i√(1 − 0.5²).
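The lightly damped repeated roots of Figure 5 can be reproduced numerically. Assuming the roots are −0.1 ± i√(1 − 0.1²), each with multiplicity 2 (my reading of the example; the qualitative conclusion does not depend on the exact imaginary part), P(s) = (s² + 0.2s + 1)² = s⁴ + 0.4s³ + 2.04s² + 0.4s + 1, and the simulated zero-state unit step response indeed dips below zero even though P(D) is Hurwitz.

```python
# Step response of a degree-4 example in the spirit of Figure 5:
#   P(s) = (s^2 + 0.2*s + 1)^2, i.e.
#   y'''' + 0.4*y''' + 2.04*y'' + 0.4*y' + 1.0*y = 1,  zero initial conditions.
# P(D) is Hurwitz (all roots at -0.1 +/- 0.995i) but NOT non-negative.

def step_response_4(coeffs, dt=1e-3, t_end=200.0):
    """coeffs = [a1, a2, a3, a4]; forward Euler in companion form."""
    a1, a2, a3, a4 = coeffs
    u = [0.0, 0.0, 0.0, 0.0]          # u = [y, y', y'', y''']
    ys = []
    for _ in range(int(t_end / dt)):
        ys.append(u[0])
        du3 = 1.0 - a1 * u[3] - a2 * u[2] - a3 * u[1] - a4 * u[0]
        u = [u[0] + dt * u[1], u[1] + dt * u[2],
             u[2] + dt * u[3], u[3] + dt * du3]
    return ys

ys4 = step_response_4([0.4, 2.04, 0.4, 1.0])
```

`min(ys4)` is negative while the response still settles at 1/a₄ = 1, confirming that a Hurwitz P(D) need not be non-negative.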

Main Results
The higher-order time derivatives of V(x, t) along the trajectory can be derived from (7). For example, the second-order time derivative is given by
d²V(x, t)/dt² = ∂V̇(x, t)/∂t + (∂V̇(x, t)/∂x) f(x, t). (23)
For (23) to exist, all the second-order partial derivatives of V with respect to x, t have to exist and the first-order partial derivatives of f with respect to x, t have to exist. In general, for the m-th order time derivative d^m V(x, t)/dt^m to exist, all the m-th order partial derivatives of V with respect to x, t have to exist and the (m − 1)-th order partial derivatives of f with respect to x, t have to exist.
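The chain-rule formula for the second-order time derivative can be verified numerically on a concrete time-varying system. In the sketch below (illustrative; the system and V are my own choices), ẋ = f(x, t) = −x + sin t has the exact solution x(t) = (sin t − cos t)/2 + Ce^{−t}; with V(x, t) = x², the formula gives V̈ = 2x cos t + (2f − 2x)f, and a central second difference of V along the exact trajectory matches it.

```python
import math

# Check of the second-order time derivative of V along a trajectory for the
# time-varying system x' = f(x, t) = -x + sin(t), with V(x, t) = x**2 (an
# illustrative choice). Exact solution: x(t) = (sin t - cos t)/2 + C*exp(-t).

C = 1.0

def x_exact(t):
    return (math.sin(t) - math.cos(t)) / 2.0 + C * math.exp(-t)

def V_along(t):
    return x_exact(t) ** 2

def Vddot_formula(t):
    """d^2V/dt^2 = dVdot/dt + (dVdot/dx)*f, with Vdot = 2*x*f."""
    x = x_exact(t)
    f = -x + math.sin(t)
    return 2.0 * x * math.cos(t) + (2.0 * f - 2.0 * x) * f

t, h = 0.7, 1e-4
numeric = (V_along(t + h) - 2.0 * V_along(t) + V_along(t - h)) / (h * h)
```

The finite-difference value `numeric` agrees with `Vddot_formula(0.7)` to well within the truncation error of the central difference.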

Theorem 4.1 Extended theorem for global uniform asymptotic stability
The dynamical system (1) is globally uniformly asymptotically stable if a scalar function V(x, t) with continuous (m+1)-th order time derivative along the trajectory exists such that
1. the time derivatives of V(x, t) along the trajectory are bounded,
|V^(k)(x, t)| ≤ γ(‖x‖), k = 1, . . . , m, (24)
where γ(‖x‖) is a continuous non-descending function;
2. V̇(x, t) along the trajectory given by (7) is bounded above by some positive function ρ(‖x‖) when t is sufficiently large,
V̇(x, t) ≤ ρ(‖x‖), (25)
i.e., V̇(x, t) does not diverge to +∞ as t → ∞ as long as x(t) is finite, which is a much weaker assumption than (5) and (6);
and if the conditions in Theorem 2.3 are satisfied except that (5) is replaced by
P(D)[V̇(x, t)] ≤ −W(‖x‖),
where P(D) is an m-th degree non-negative Hurwitz polynomial differential operator.

Proof Along the trajectory of (1), denote
v(t) = V(x(t), t) and u(t) = P(D)[V̇(x, t)],
where V and its time derivatives are evaluated along the trajectory ẋ = f(x, t), so that D · P(D)[v(t)] = u(t) ≤ −W(‖x(t)‖) ≤ 0, ∀t.
Equivalently, u(t) and v(t) can be considered the input and output, respectively, of a time-invariant linear ordinary differential equation defined by the (m+1)-th degree polynomial differential operator D · P(D). Thus, similar to (17), v(t) can be written in two independent parts,
v(t) = v_1(t) + v_2(t),
where v_1(t) is the zero-state response of the differential equation with u(t) as the input and zero initial conditions v_1(t_0), v̇_1(t_0), . . . , v_1^(m)(t_0), and v_2(t) is the zero-input response with the initial conditions given by
v_2(t_0) = V(x_0, t_0), v̇_2(t_0) = V̇(x_0, t_0), . . . , v_2^(m)(t_0) = V^(m)(x_0, t_0), (34)
where the time derivatives V̇, . . . , V^(m) are taken along the trajectory. The above two equivalent representations are illustrated in Figures 6 and 7. Therefore, v_2(t) is given by
v_2(t) = c_0 + r_2(t),
where c_0 is a constant and r_2 is the zero-input response of the linear system P(D)[r(t)] = u(t) and is given in (18). As noted previously, λ_j, n_j depend on P(D) and coefficients c_{k,j} depend on P(D) and the initial conditions V(x_0, t_0), . . . , V^(m)(x_0, t_0). Because P(D) is Hurwitz, the real part of λ_j is negative for any j. Therefore, v_2(t) converges to some constant as t → ∞. Because v_2(t) converges and v(t) = V(x(t), t) ≥ 0,
v_1(t) = v(t) − v_2(t) ≥ −M, ∀t ≥ T, (35)
where T and M are large positive constants. Denote h(t) the zero-state unit impulse response of the linear differential equation defined by P(D). From the property of the impulse response, the zero-state response of this linear differential equation with u(t) as the input is given by ∫_{t_0}^{t} h(t − τ)u(τ) dτ. v_1, the zero-state response of the linear differential equation defined by D · P(D), is
v_1(t) = ∫_{t_0}^{t} ∫_{t_0}^{s} h(s − τ)u(τ) dτ ds = ∫_{t_0}^{t} y_1(t − τ)u(τ) dτ, (40)
where y_1(·) is the zero-state unit step response and is given in (20). The last step holds because ∫_{0}^{t−τ} h(η) dη is the unit step response of the linear differential equation defined by P(D) at the instant of t − τ, i.e.,
∫_{0}^{t−τ} h(η) dη = y_1(t − τ). (42)
Because P(D) is non-negative, y_1(t − τ) is non-negative for any t − τ, and because u(τ) ≤ 0, ∀τ, it follows that v_1(t) ≤ 0, ∀t.
I next show by contradiction that
lim_{t→∞} v(t) = 0. (43)
Assume that v(t) does not converge to 0 as t → ∞. Then there exist a positive constant ε and an infinite time sequence {t_i}, i = 1, 2, . . ., such that v(t_i) > ε, where t_{i+1} > t_i and t_i goes to infinity as i → ∞. From (2),
v(t) ≤ β(‖x(t)‖). (44)
Because β(·) is continuous and non-descending, there must exist a positive constant µ(ε) as a function of ε such that
‖x(t)‖ ≥ µ(ε) whenever v(t) ≥ ε. (45)
Furthermore, because W(·) is continuous and non-descending,
u(t) ≤ −W(‖x(t)‖) ≤ −W(µ(ε)) = −ω(ε) whenever v(t) ≥ ε, (46)
where ω(ε) is a positive constant as a function of ε. If a time instant t exists in the interval (t_{i−1}, t_i) such that v(t) = ε/2, then let s_i be the maximum value of t such that t < t_i and v(t) = ε/2. Otherwise, let s_i = t_{i−1}. Figure 8 illustrates the two different cases of t_i and s_i. In either case,
v(t) ≥ ε/2, ∀t ∈ [s_i, t_i].
By construction, intervals [s_i, t_i] do not overlap each other except possibly at the end points.
I next show that an infinite number of s_i exist such that v(s_i) = ε/2. Assume on the contrary that a number i_0 exists such that for any i > i_0, s_i = t_{i−1}. This means v(t) > ε/2, ∀t > t_{i_0}. Then u(t) ≤ −ω(ε/2), where ω(ε/2) is a constant derived similarly to ω(ε) in (44), (45), and (46). From (40) and (42),
v_1(t) ≤ −ω(ε/2) ∫_{t_{i_0}}^{t} y_1(t − τ) dτ. (50)
From Lemma 3.2, v_1(t) goes to negative infinity as t → ∞, which contradicts (35).
Denote {s_{i_k}} the subsequence for which v(s_{i_k}) = ε/2. As shown below by contradiction, t_{i_k} − s_{i_k} does not converge to 0 as k → ∞. Because of the continuity of v(t), there exists a time instant s_{i_k,1} at which v(s_{i_k,1}) = ε. As illustrated in Figure 8, s_{i_k,1} is the minimum value of t such that t > s_{i_k} and v(t) = ε. From the mean-value theorem, there exists s_{i_k,2} ∈ (s_{i_k}, s_{i_k,1}) such that
v̇(s_{i_k,2}) = (v(s_{i_k,1}) − v(s_{i_k}))/(s_{i_k,1} − s_{i_k}) = (ε/2)/(s_{i_k,1} − s_{i_k}). (51)
Assume that lim_{k→∞} t_{i_k} − s_{i_k} = 0. Then s_{i_k,1} − s_{i_k} → 0 and, from (51), v̇(s_{i_k,2}) goes to +∞. On the other hand, v(s_{i_k,2}) < ε because s_{i_k,2} < s_{i_k,1}.
From (2), v(s_{i_k,2}) < ε implies
‖x(s_{i_k,2})‖ ≤ K_1(ε),
where K_1(ε) is a positive constant as a function of ε. Thus, v̇(s_{i_k,2}) < K_2(ε) for some positive constant K_2(ε), because V̇(x, t) along the trajectory is bounded by ρ(‖x‖) when t is sufficiently large from (25) in the theorem. Contradiction! Therefore, t_{i_k} − s_{i_k} does not converge to 0 as k → ∞. In other words, there exist a positive constant ζ and a subsequence of {s_{i_k}, t_{i_k}}, referred to as {s_{i_k_j}, t_{i_k_j}}, j = 1, 2, . . ., such that
t_{i_k_j} − s_{i_k_j} ≥ ζ, ∀j. (57)
Denote Φ the union of all the intervals [s_{i_k_j}, t_{i_k_j}]. Similar to (50),
v_1(t) ≤ −ω(ε/2) ∫_{Φ ∩ [t_0, t]} y_1(t − τ) dτ.
From (21), y_1(t) ≥ 1/a_m − ε_1 for a small positive constant ε_1 when t is sufficiently large. Because of (57), the total length of Φ ∩ [t_0, t] grows without bound as t → ∞, so the right side and thus v_1(t) go to negative infinity, which contradicts (35). Hence, by contradiction, I have proved (43). From (2), α(‖x(t)‖) ≤ v(t) → 0, and I conclude that lim_{t→∞} x(t) = 0.
Next I show that x* is uniformly stable in the sense of Lyapunov. For any given ε > 0, to show that ‖x(t)‖ < ε, ∀t ≥ t_0, from (2) it suffices to require that
v(t) = V(x(t), t) < ψ(ε), ∀t ≥ t_0,
where ψ(ε) is a positive constant as a function of ε, because α(‖x‖) ≤ V(x, t) < ψ(ε) and α(‖x‖) is continuous non-descending and α(‖x‖) = 0 only when x = 0. From (40) and (42), v_1(t) ≤ 0, ∀t.
Therefore, it suffices to require that
v_2(t) < ψ(ε), ∀t ≥ t_0.
From (34), v_2(t) is a linear combination of the coefficients c_{k,j} and V(x_0, t_0), all of which can be made arbitrarily small, uniformly in t_0, by choosing ‖x_0‖ sufficiently small because of (2) and (24). Choosing ‖x_0‖ < δ(ε) for a suitable positive constant δ(ε) therefore ensures v(t) = v_1(t) + v_2(t) < ψ(ε), ∀t ≥ t_0, and x* is uniformly stable in the sense of Lyapunov.

Theorem 2.3 is a special case of Theorem 4.1 when m = 0. From Lemma 3.4, one can use any Hurwitz polynomial differential operator of degree m = 0, 1, 2, 3. However, it should be pointed out that the use of the higher-order time derivatives in Theorem 4.1 imposes more stringent requirements on the choice of V(x, t) and on the dynamical system itself. Specifically, from (7), f(x, t) is not necessarily continuous in Theorem 2.3. From (23), both ∂f/∂x and ∂f/∂t have to exist to apply Theorem 4.1 with m = 1. Just like the classical Lyapunov theorem, Theorem 4.1 provides a sufficient condition to test global uniform asymptotic stability. One can try out multiple choices of m, P(D) and V(x, t) to see whether one of them works out.
Theorem 4.2 Extended theorem for uniform stability in the sense of Lyapunov
The dynamical system (1) is uniformly stable in the sense of Lyapunov if a scalar function V(x, t) with continuous (m+1)-th order time derivative along the trajectory exists such that (24) holds when ‖x‖ < Ω, and if the conditions in Theorem 2.4 are satisfied except that (11) is replaced by
P(D)[V̇(x, t)] ≤ −W(‖x‖), ‖x‖ < Ω, (72)
where P(D) is an m-th degree non-negative Hurwitz polynomial differential operator.
Proof Because of the difference between (12) and (6), (46) in the proof of Theorem 4.1 does not hold here, and v(t) does not necessarily converge to 0 as shown in (43). The difference between Theorems 4.1 and 4.2 parallels that between Theorems 2.3 and 2.4.
The proof of Theorem 4.1 has already provided the steps from (62) onward to show uniform stability. The proof is based on u(t) ≤ 0, ∀t, which does not rely on (6); (12) is sufficient to ensure u(t) ≤ 0, ∀t. Therefore, the proof is applicable here, with the only difference being that in this theorem (8), (24) and (72) now hold for ‖x‖ < Ω instead of for x ∈ R^n. To show uniform stability here, consider any given ε > 0. Without loss of generality, suppose that ε ≤ Ω. It suffices to show that ‖x(t)‖ < ε, ∀t ≥ t_0, for which (8), (24) and (72) always hold.
From the proof of Theorem 4.1, it follows that if ‖x_0‖ < δ(ε), where δ(ε) is given in (71), then (66) and (70) hold. As a result, v_2(t) < ψ(ε), ∀t ≥ t_0. As the trajectory starts with ‖x_0‖ < ε ≤ Ω, v_1(t) ≤ 0 because u(t) ≤ 0 from the assumption of the theorem and because of (40) and (42). Thus v(t) < ψ(ε), ∀t ≥ t_0, and therefore ‖x(t)‖ < ε from (8). In other words, once the trajectory starts within the local region around the equilibrium point, (8), (24) and (72) hold, therefore keeping the trajectory within the region.
Hence, it is concluded that ‖x(t)‖ < ε, ∀t ≥ t_0, and x* is uniformly stable in the sense of Lyapunov from Definition 2.1.
Theorem 2.4 is a special case of Theorem 4.2 when m = 0.

An Example
Consider the following example of a one-dimensional time-varying linear system
ẋ = g(t)x, (77)
with t_0 ≥ 0. The solution of this system can be given analytically as
x(t) = x_0 exp(∫_{t_0}^{t} g(τ) dτ),
from which stability can be determined. However, for the sake of illustration, I use Lyapunov's direct method and compare the Lyapunov and extended theorems for asymptotic stability.
Choose the energy function V(x, t) = x²/2, so that V̇(x, t) = g(t)x² along the trajectory. To apply Theorem 2.3, g(t) then has to be negative ∀t ≥ t_0. Now consider the second-order time derivative of V(x, t) and let
P(D) = D + a_1,
where a_1 > 0, making P(D) non-negative Hurwitz. It follows that
P(D)[V̇(x, t)] = V̈(x, t) + a_1 V̇(x, t) = G(t)x², where G(t) = ġ(t) + 2g²(t) + a_1 g(t).
For the example g(t) given below in (83), V̇(x, t) = g(t)x² is not always negative, while P(D)[V̇(x, t)] = G(t)x² < 0, ∀t ≥ t_0. Therefore the system is globally uniformly asymptotically stable from Theorem 4.1.
To apply Theorem 4.1, g(t) does not have to be negative ∀t ≥ t_0. When g(t) > 0, if ġ(t) is sufficiently negative, G(t) can still be negative. Consider a concrete example of g(t):
g(t) = −1 + 5e^{−10t}. (83)
It is easy to show that G(t) < 0, ∀t ≥ t_0. Figure 9 plots g(t) and G(t) of the dynamical system (77) with (83) and t_0 = 0. Because g(t) is not always negative, Theorem 2.3 cannot be applied to confirm global uniform asymptotic stability. Because G(t) < 0, ∀t ≥ t_0, Theorem 4.1 can be applied to show that x* = 0 in (77) with (83) is globally uniformly asymptotically stable.
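The example can be checked end to end numerically. In the sketch below (illustrative; a₁ = 3 is my own workable choice, since the paper leaves a₁ generic), V(x, t) = x²/2 gives V̇ = g(t)x² and P(D)[V̇] = G(t)x² with G(t) = ġ(t) + 2g²(t) + a₁g(t); rescaling V only multiplies G by a positive constant and cannot change its sign. The code evaluates G(t) on a grid, confirms g(0) > 0 (so the classical condition fails) while G(t) < 0, and uses the closed-form solution to confirm convergence even though the energy initially grows.

```python
import math

# Numerical check of the example x' = g(t)*x with g(t) = -1 + 5*exp(-10*t),
# t0 = 0. With V(x, t) = x**2/2 (so Vdot = g(t)*x**2) and P(D) = D + a1:
#   P(D)[Vdot] = G(t)*x**2,  G(t) = gdot(t) + 2*g(t)**2 + a1*g(t).
# a1 = 3 is an illustrative choice that makes G(t) < 0 for all t >= 0.

a1 = 3.0

def g(t):
    return -1.0 + 5.0 * math.exp(-10.0 * t)

def gdot(t):
    return -50.0 * math.exp(-10.0 * t)

def G(t):
    return gdot(t) + 2.0 * g(t) ** 2 + a1 * g(t)

def x_exact(t, x0=1.0):
    # x(t) = x0 * exp( integral of g(tau) from 0 to t )
    return x0 * math.exp(-t + 0.5 * (1.0 - math.exp(-10.0 * t)))

grid = [k * 0.001 for k in range(20001)]          # t in [0, 20]
G_max = max(G(t) for t in grid)
```

So the classical test fails at t = 0 (g(0) = 4 > 0), yet G(t) < 0 everywhere on the grid and the state still converges to 0, as Theorem 4.1 predicts.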

Conclusion
Stability is an important topic in the study of dynamical systems. A major challenge is that the analytical solution of a time-varying nonlinear dynamical system is in general not known. Lyapunov's direct method is a classical approach used for many decades to study stability without explicitly solving the dynamical system. Roughly speaking, an equilibrium is stable if an energy function monotonically decreases along the trajectory of the dynamical system. In this paper, I extend Lyapunov's direct method by allowing the energy function to temporarily increase. More precisely, I prove two theorems, one on global uniform asymptotic stability and the other on stability in the sense of Lyapunov, where stability is guaranteed provided that the evolution of the energy function satisfies an inequality of a non-negative Hurwitz polynomial differential operator. The classical Lyapunov theorems are special cases of the extended theorems. I provide an example in which the new theorem successfully determines stability while the classical Lyapunov's direct method fails. In future work, I hope to apply the extended stability theorems to more sophisticated dynamical systems in the real world.