Global and Local Controllability

M. H. P. L. Mello and L. H. S. Santos

In general, mathematical global properties imply local ones. That is the case of a global (or absolute) minimum or maximum point of a real function, which is in particular a local extremum point. Also, in some situations a local property can be used to establish a global property of a mathematical object. Thus, there is usually a connection between the two concepts. When dealing with controllability of dynamical systems, meaning systems of autonomous ordinary differential equations with control, a certain notion of a system being “globally” controllable does not always imply that it is locally controllable. We present in detail the example suggested in [2] in which, for a nonlinear control system, it is possible to drive arbitrary states into a state x_0 using large excursions, but a state close to the desired x_0 cannot be controlled to x_0 along a path that stays in a neighborhood of x_0. In this sense, what we mean for the moment by “global” controllability is the concept known in the literature as complete controllability, and it does not imply local controllability for nonlinear systems, although for linear systems both notions are equivalent [2], [1]. Thus, to define the concept of global controllability for nonlinear systems, both the complete and the local controllability notions must be taken into consideration.


Introduction
In general, mathematical global properties imply local ones. That is the case of a global (or absolute) minimum or maximum point of a real function, which is in particular a local extremum point. Also, in some situations a local property can be used to establish a global property of a mathematical object. Thus, there is usually a connection between the two concepts. When dealing with controllability of dynamical systems, meaning systems of autonomous ordinary differential equations with control, a certain notion of a system being “globally” controllable does not always imply that it is locally controllable. We present in detail the example suggested in [2] in which, for a nonlinear control system, it is possible to drive arbitrary states into a state x_0 using large excursions, but a state close to the desired x_0 cannot be controlled to x_0 along a path that stays in a neighborhood of x_0. In this sense, what we mean for the moment by “global” controllability is the concept known in the literature as complete controllability, and it does not imply local controllability for nonlinear systems, although for linear systems both notions are equivalent [2], [1]. Thus, to define the concept of global controllability for nonlinear systems, both the complete and the local controllability notions must be taken into consideration.

Preliminary Notions on Controllability
Definition 2.1. By an open-loop control system we mean an autonomous system of the form

dx/dt = f(x, u),

where f : D × U → R^n, x = x(t) ∈ D is the state variable vector and u = u(t) ∈ U is the input or control variable vector. We assume that f is a continuous function, locally Lipschitz with respect to the variable x, defined on an open and connected subset D ⊂ R^n, called the state space. We observe that if f ∈ C^1, then f is locally Lipschitz. The subset U ⊂ R^m is a nonempty set called the set of admissible controls, or control space. We assume that the control functions are piecewise continuous, although in a more general context the controls are taken to be Lebesgue integrable functions. Also, if the control space U is a one-element set, we regard the system as a classical dynamical system, for if it is possible to control the system, there is only one way to do so.
The state x = x(t) and the control u = u(t) variables are defined on a finite time interval I = [0, T], T > 0, or on I = [0, +∞), depending on the system we are interested in controlling. For the purposes of the definitions in this article, we consider these functions defined only on a finite interval.
Considering the above hypotheses, for every fixed control u the autonomous system satisfies the Existence and Uniqueness Theorem for ordinary differential equations for any given initial condition x(0) = x_0. Let us denote by x_u = x_u(t) the unique solution of dx/dt = f(x, u) such that x_u(0) = x_0. We call this solution the trajectory, or path, associated to the fixed control u.

Definition 2.2 (Admissible process). An admissible process is a pair (x_u, u), where u is a fixed control variable and x_u = x_u(t) is the unique solution of the autonomous system dx/dt = f(x, u) associated to the fixed control u, satisfying a given initial condition x_u(0) = x_0.

Definition 2.3 (Controllable state in a finite time). Let x_0, x_1 ∈ D. The state x_0 is U-controllable to the state x_1 in time T, T > 0, if there exists an admissible process (x_u, u), where u ∈ U and x_u = x_u(t) is the trajectory associated to the control u, both defined on the interval [0, T], such that x_u(0) = x_0 and x_u(T) = x_1.
That is, the state x_0 can be driven into the state x_1 through the application of a suitable control.
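As a concrete illustration (our own, not from the article), the scalar integrator dx/dt = u on D = R makes Definition 2.3 tangible: the constant control u = (x_1 − x_0)/T drives x_0 to x_1 in time T. A minimal numerical sketch:

```python
# Illustrative system (not from the article): the scalar integrator
#   dx/dt = u,  x(0) = x0.
# The constant control u = (x1 - x0)/T is piecewise continuous, hence
# admissible, and drives x0 to x1 in time T.  We check this with Euler steps.

def drive(x0, x1, T, steps=10_000):
    """Integrate dx/dt = u with the constant control u = (x1 - x0)/T."""
    u = (x1 - x0) / T          # fixed admissible control
    dt = T / steps
    x = x0
    for _ in range(steps):
        x += u * dt            # Euler step of dx/dt = u
    return x

x_final = drive(x0=-3.0, x1=5.0, T=2.0)
print(abs(x_final - 5.0) < 1e-9)  # the trajectory ends (up to round-off) at x1
```

Here (x_u, u) with x_u(0) = −3 and x_u(2) = 5 is an admissible process in the sense of Definition 2.2.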
Definition 2.4 (Controllability in a neighborhood). Let x_0, x_1 ∈ D and let W ⊂ D be an open set such that x_0, x_1 ∈ W. We say that x_0 can be controlled to x_1 without leaving W (or inside W) if there exist a time T, T > 0, and an admissible process (x_u, u), where u ∈ U and x_u = x_u(t) is the trajectory associated to the control u, both defined on the interval [0, T], such that x_u(0) = x_0, x_u(T) = x_1, and x_u(t) ∈ W for every t ∈ [0, T].

Local and Global Controllability
The idea of local controllability related to a state x is that any two states close to the desired state x can be changed one into the other through the application of a suitable control, in such a way that the trajectory remains close to the state x, i.e., without deviating far from it.

Definition 3.1 (Local controllability related to a state). Let x ∈ D. The system (1) is locally controllable at the state x (or related to the state x, or around x) if for each neighborhood W of x there exists a neighborhood V, with x ∈ V ⊂ W, such that for any pair of states x_0, x_1 ∈ V, the state x_0 can be controlled to the state x_1 without leaving W, in the sense of Definition 2.4. This concept of local controllability is also called strong local controllability around the state x [2].
In the next definition, the property of a system being completely controllable is in some sense a “global” notion, for it allows us to control the entire system, although, during the process, the trajectory taking one state into the other may leave a neighborhood or subset W ⊂ D that was fixed for some purpose.

Definition 3.2 (Complete controllability). Let D be the state space and U the set of admissible controls. The system (1) is said to be completely U-controllable in D if for any two arbitrary states x_0, x_1 ∈ D there exists an admissible process (x_u, u), where u ∈ U and x_u = x_u(t) is the trajectory associated to the control u, both defined on an interval [0, T], for some T > 0, such that x_u(0) = x_0 and x_u(T) = x_1.
The following example [2] shows that a nonlinear system can be completely controllable without being locally controllable related to a state.
(i.2) If ρ_0 = ρ_1, we choose the null control. For if u = 0, the system (2) turns out to be a linear system without control, whose matrix has purely imaginary eigenvalues. So one state is driven to the other along the closed orbit (a circle) of the linear system that passes through these two states.
(ii) The system is not locally controllable related to the state with Cartesian coordinates (x_0, y_0) = (1, 0).
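The null-control case (i.2) can be pictured with a minimal stand-in (an assumption on our part, since the full statement of system (2) is not reproduced in this excerpt): a planar linear system with purely imaginary eigenvalues, such as dx/dt = −y, dy/dt = x, moves every state along a circle centered at the origin, so two states with the same radius ρ are joined simply by flowing along that closed orbit:

```python
import math

# Stand-in for the u = 0 case: the linear system
#   dx/dt = -y,  dy/dt = x
# has eigenvalues +/- i, and every orbit is a circle x^2 + y^2 = const.
# Starting at (1, 0), after time t = pi/2 the state reaches (0, 1):
# same radius rho = 1, different position on the same closed orbit.

def flow(x, y, t):
    """Exact solution of the system: rotation by angle t about the origin."""
    c, s = math.cos(t), math.sin(t)
    return c * x - s * y, s * x + c * y

x1, y1 = flow(1.0, 0.0, math.pi / 2)
print(abs(x1) < 1e-12 and abs(y1 - 1.0) < 1e-12)  # reached (0, 1)
print(abs(math.hypot(x1, y1) - 1.0) < 1e-12)      # radius preserved along the orbit
```

This is only an illustration of the mechanism; it says nothing about item (ii), which concerns states of different radii near (1, 0).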
Definition 3.3 (Global controllability related to a state). Let D be the state space and x ∈ D. The system (1) is globally controllable to the state x if
(i) the system is locally controllable related to the state x;
(ii) every x_0 ∈ D can be U-controlled to the state x in some time T, T > 0.
Definition 3.4 (Global controllability). Let D be the state space. The system (1) is globally controllable in D if
(i) for each x ∈ D, the system is locally controllable related to the state x;
(ii) the system is completely controllable in D.
In the definitions above, if every pair of given states x_0 and x_1 can be controlled within one and the same finite interval of time [0, T], T > 0, then we say that the system is locally controllable at time T, completely controllable at time T, and so on.

Equivalence between local and global controllability for autonomous linear systems
An autonomous linear control system is a system of the form

dx/dt = Ax + Bu,

where A is an n × n matrix, B is an n × m matrix, x = x(t) ∈ R^n and u = u(t) ∈ R^m. We denote it by (A, B). We suppose det A ≠ 0 in order to have the origin as the only equilibrium point of the linear system with no controls, dx/dt = Ax. Also, we consider the control set U with no constraints, which means that we do not impose restrictions on the images of the controls. So we can take any piecewise continuous function u : I → R^m as a control input, where I = [0, T] or I = [0, +∞), depending on whether we are interested in controlling the system in a finite time T or not.
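For such systems (A, B), complete controllability admits a classical computational test, the Kalman rank criterion: the system is completely controllable in R^n if and only if the matrix [B, AB, ..., A^(n-1)B] has rank n. The article does not state this criterion; we include it only as a standard way to check the notion numerically:

```python
import numpy as np

# Kalman rank criterion (standard result, not stated in the article):
# dx/dt = Ax + Bu is completely controllable in R^n
# iff the controllability matrix [B, AB, ..., A^(n-1) B] has rank n.

def is_controllable(A, B):
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])       # next block A^k B
    C = np.hstack(blocks)                   # n x (n*m) controllability matrix
    return int(np.linalg.matrix_rank(C)) == n

# A nonsingular A (det A = 2), matching the assumption det A != 0 above.
A = np.array([[1.0, 0.0], [0.0, 2.0]])

# Input entering both coordinates: controllable.
B = np.array([[1.0], [1.0]])
print(is_controllable(A, B))        # True

# Input that never reaches the second coordinate, which then evolves
# autonomously: not controllable.
B_bad = np.array([[1.0], [0.0]])
print(is_controllable(A, B_bad))    # False
```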
Theorem 4.1. Let (A, B) be an autonomous linear control system. The following properties are equivalent:
(i) the system is locally controllable at the state x = 0;
(ii) the system is controllable to x = 0 in R^n.

Proof:
(i) Suppose that the linear system (A, B) is locally controllable at the state x = 0. Then there exists δ > 0 such that every state in the ball of radius δ centered at the origin can be driven to the origin in some time T > 0.

Let y ∈ R^n, y ≠ 0. We claim that the state y is controllable to the origin in time T. In fact, let us take x = (δ/(2∥y∥)) y. We notice that ∥x∥ = δ/2 < δ. Let (x_u, u) be an admissible process driving x to the origin, as given above, and consider the control w(t) = (2∥y∥/δ) u(t). By linearity of the system, x_w(t) = (2∥y∥/δ) x_u(t) is the trajectory associated to w; it satisfies x_w(0) = (2∥y∥/δ) x = y and x_w(T) = 0, so y is driven to the origin in time T.
Remark 4.2. If a state x is controlled to the origin as t → +∞, we have the analogous definition of a state being asymptotically controllable to the origin. Just as in the theorem above, for autonomous linear control systems the concepts of global and local asymptotic controllability to the origin coincide ([2], p. 213).
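A minimal illustration of asymptotic controllability (our own, not from [2]): for the scalar linear system dx/dt = x + u, the feedback control u = −2x turns the dynamics into dx/dt = −x, so every state tends to the origin as t → +∞:

```python
# Illustration (not from the article): the scalar linear system
#   dx/dt = x + u
# with the feedback control u(t) = -2 x(t) becomes dx/dt = -x,
# whose solution x(t) = x0 * exp(-t) tends to the origin as t -> +infinity,
# i.e., the state is asymptotically controlled to 0.

def trajectory(x0, t, steps=100_000):
    """Euler integration of dx/dt = x + u with the feedback u = -2x."""
    dt = t / steps
    x = x0
    for _ in range(steps):
        u = -2.0 * x            # feedback control
        x += (x + u) * dt       # Euler step of dx/dt = x + u
    return x

x_late = trajectory(x0=10.0, t=20.0)
print(abs(x_late) < 1e-6)  # close to the exact value 10 * exp(-20)
```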

Conclusion
For autonomous linear control systems (A, B), local and global controllability are equivalent. But the same is not true for nonlinear systems. Thus, for nonlinear systems the definition of global controllability must include the property of the system being locally controllable as a first condition, plus an extra one, which is the complete controllability condition. When the admissible processes are defined for t ≥ 0, similar results hold for the local and global asymptotic controllability notions.

Example 3.1. Let us consider the open-loop control system