Random Robustness for Control Performance Algorithms - Dr. Qiao Mu
2024-10-24
Modern approaches to control system performance analysis focus primarily on deterministic methods (Bhattacharyya, Chapellat, and Keel, 1995; Chen, Song, and Chen, 2007; Milanese and Tempo, 1985; Zinober, 1990). The main drawback of this approach is its computational complexity: many controller analysis and synthesis problems are "hard" in the sense of NP-hardness (Blondel and Tsitsiklis, 2000; Nemirovskii, 1993). As models become more complex, exhaustive search of the solution space becomes unmanageable and may not be feasible for real-world systems (Vidyasagar, 1998). A second source of difficulty is dimensionality: even though some results can be obtained for low-order systems, the problem becomes intractable as the dimension of the system increases (Papadimitriou and Tsitsiklis, 1986). These issues have pushed the research community to look for ways around this "complexity theoretic barrier". As a result, the study of probabilistic methods has attracted increasing attention (Rust, 1997). Indeed, probability is central to the theory of choice under uncertainty (e.g., game theory). Moreover, probabilistic and randomized algorithms based on standard Monte Carlo methods provide complementary approaches for studying robustness (Calafiore, Dabbene, & Tempo, 2003; Ray & Stengel, 1993), control synthesis (Ishii, Basar, & Tempo, 2005; Koltchinskii, Abdallah, Ariola, Dorato, & Panchenko, 2000; Vidyasagar, 1998), and control design (Vidyasagar, 1998). Although randomized techniques are readily used for performance verification of control systems (Calafiore, Dabbene, & Tempo, 2007), they often describe only idealized problems. In particular, since randomized algorithms deal only with the static case of stochastic processes (static random variables), they are not applicable to control systems with time-varying uncertainties (e.g. external noise, and internal parameters affecting the process and the controller).
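To make the distinction with the dynamic case concrete, the following minimal Python sketch (our own illustration, not taken from the cited works) shows the standard randomized estimate: the uncertain parameter is a static random variable, drawn once per experiment and held fixed while a hypothetical performance measure is evaluated.

```python
import numpy as np

def performance(q):
    """Hypothetical performance measure of a closed-loop system for a fixed
    (static) uncertain parameter q; purely illustrative."""
    return abs(1.0 - 1.0 / (1.0 + q))

def randomized_estimate(threshold=0.3, n_samples=10000, seed=0):
    """Standard Monte Carlo (randomized) estimate: the uncertainty q is a
    static random variable, drawn once per experiment and held constant."""
    rng = np.random.default_rng(seed)
    q_samples = rng.uniform(0.5, 2.0, size=n_samples)  # assumed uncertainty set
    satisfied = np.array([performance(q) <= threshold for q in q_samples])
    return satisfied.mean()  # empirical probability of meeting the specification

if __name__ == "__main__":
    print("P(performance <= 0.3) ~", randomized_estimate())
```

By contrast, a time-varying uncertainty would require drawing an entire random path for each experiment, which is the setting developed below.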
This paper presents stochastic methods for the dynamic case of stochastic processes (time-dependent random variables). Our goal is to show that stochastic processes (e.g. Brownian motion) provide a natural model for realistic control systems. This allows us to approach the robustness problem with more advanced mathematical tools, such as stochastic differential equations (Oksendal, 1985; Yong and Zhou, 1999). Although stochastic methods have been applied to many scientific fields, only a few authors have applied them to robust control. Ravichandran (1991) presented various stochastic models used in the reliability analysis of repairable redundant systems.
Marti (2000) showed that, in addition to computing robust optimal decisions, stochastic methods can also determine solutions to optimal control problems under real-time conditions (e.g. optimal control of robots). Fetiso (2004) constructed a robust scheme for stochastic control systems with respect to an average performance criterion. Brockett (2000) treated stochastic optimal control problems with quantized feedback and formulated the performance criteria in terms of steady-state probability densities. A stochastic maximum principle for optimal control under uncertainty was proposed by Rico-Ramirez and Diwekar (2004). Zhang, Chen, and Tseng (2005) treated filtering problems for an important class of noise/disturbance described by Brownian motion. The reachability problem under random Brownian noise was considered by Digailova, Kurzhanski, and Varaiya (2008). Since only a few papers deal with stochastic methods in robust control, the topic is relatively new. Our work proposes a stochastic approach to robustness analysis of control systems.
To clarify the idea, we adopt the platform design approach of Agostini, Balluchi, Bicchi, Piccoli, Sangiovanni-Vincentelli, and Zadarnowska (2005): for a given control system, a set of ready-made feedbacks is used to match some performance criteria. The system includes parameters that may evolve in a stochastic manner, affecting both the system dynamics and the feedback. In this framework, the main result of this paper is that the stochastic algorithm produces realistic probability distribution functions for the performance measures; in particular, these distributions are more concentrated than those produced by the randomized algorithm.
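As a schematic of this platform-based setup, the following sketch (our own toy illustration; the scalar dynamics, the gain library, and the cost are hypothetical, not the models used in Section 4) evaluates a small set of ready-made feedback gains against an average performance criterion over sampled parameter evolutions and selects the best one.

```python
import numpy as np

def closed_loop_cost(K, theta_path, x0=1.0, dt=0.01):
    """Illustrative scalar closed loop x' = theta(t)*x + u with u = -K*x,
    driven by a time-varying parameter path; cost = integrated |x|."""
    x, cost = x0, 0.0
    for theta in theta_path:
        x += dt * (theta * x - K * x)
        cost += dt * abs(x)
    return cost

def best_feedback(feedback_library, theta_paths):
    """Pick the ready-made gain with the smallest average cost over the
    sampled parameter evolutions (a stand-in for matching a criterion)."""
    avg_costs = [np.mean([closed_loop_cost(K, p) for p in theta_paths])
                 for K in feedback_library]
    return feedback_library[int(np.argmin(avg_costs))], avg_costs

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # assumed stochastic parameter evolutions: random-walk paths around 0.5
    paths = [0.5 + 0.1 * np.cumsum(rng.normal(0, np.sqrt(0.01), 500))
             for _ in range(20)]
    K_star, costs = best_feedback([1.0, 2.0, 5.0], paths)
    print("selected gain:", K_star, "average costs:", costs)
```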
First, we develop the theoretical results needed to deal with stochastic differential equations for the evolution of the system state and parameters; in particular, we want to preserve the main features of Brownian motion while keeping the parameter values within a bounded set. For clarity, we make a precise choice for the evolution of the stochastic process governing the parameters, but the results of this paper also hold in the general case. Moreover, the simulation technique can be adapted to any Ito diffusion. To implement the simulation, we need to generate sample paths of Brownian motion, as described in the existing literature; a convergence analysis is given in Section 4. Then, we give a natural definition of robustness of the performance measures, both in expectation and in probability, with respect to given bounds. The effectiveness of the method is demonstrated using the Chernoff bound, which has previously been used in the static, randomized case (Calafiore et al., 2003; Chernoff, 1952). In particular, we show how to use the Chernoff bound both directly and indirectly to obtain results on robustness in the mean and in probability.
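As an illustration of the simulation step, the sketch below (a minimal example under our own assumptions, not the exact construction used later in the paper) samples Brownian increments and runs an Euler-Maruyama discretization of a one-dimensional Ito diffusion, with a simple reflection at the boundaries so that the simulated parameter stays in a bounded set; the drift, diffusion coefficient, and bounds are illustrative choices.

```python
import numpy as np

def brownian_increments(n_steps, dt, rng):
    """Sample the i.i.d. Gaussian increments of a standard Brownian motion."""
    return rng.normal(0.0, np.sqrt(dt), size=n_steps)

def bounded_ito_path(x0, drift, sigma, lo, hi, n_steps=1000, dt=0.001, seed=0):
    """Euler-Maruyama discretization of dX = drift(X) dt + sigma(X) dW,
    reflected (and, as a safeguard, clipped) at the boundaries so the
    simulated parameter stays inside the bounded set [lo, hi]."""
    rng = np.random.default_rng(seed)
    dW = brownian_increments(n_steps, dt, rng)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        x_next = x[k] + drift(x[k]) * dt + sigma(x[k]) * dW[k]
        # reflect an excursion past a boundary back into the interval
        if x_next > hi:
            x_next = 2.0 * hi - x_next
        elif x_next < lo:
            x_next = 2.0 * lo - x_next
        x[k + 1] = min(max(x_next, lo), hi)
    return x

if __name__ == "__main__":
    path = bounded_ito_path(0.0, drift=lambda x: -x, sigma=lambda x: 0.5,
                            lo=-1.0, hi=1.0)
    print("path stays in [-1, 1]:", bool(path.min() >= -1.0 and path.max() <= 1.0))
```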
We then turn to concrete examples. In the first example, we focus on the trajectory-following problem for a Dubins car with feedback inaccuracies and external noise. We run both the randomized and the stochastic algorithms and compare their results. Using the analysis developed from the Chernoff bound, we give explicit values for the expected performance measure and for the probability of staying below a fixed level. Computing probability distribution functions (pdfs for short) allows us to illustrate the practical differences between randomized and stochastic algorithms. Finally, we focus on a robotic manipulation example, illustrating a technique for dealing with parameter sensitivity. We use a randomized algorithm to determine the effect of each parameter on the system through a two-step process.
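The sketch below illustrates, under stated assumptions, how the additive Chernoff/Hoeffding bound fixes the number of Monte Carlo runs for a prescribed accuracy and confidence, and how the resulting samples yield empirical estimates of the expected performance and of the probability of staying below a fixed level; the noisy tracking error used here is a toy stand-in, not the Dubins car model of Section 4.

```python
import numpy as np

def chernoff_sample_size(eps, delta):
    """Additive Chernoff/Hoeffding bound: with N >= ln(2/delta) / (2 eps^2)
    samples, the empirical mean of a [0, 1]-valued quantity is within eps of
    its expectation with probability at least 1 - delta."""
    return int(np.ceil(np.log(2.0 / delta) / (2.0 * eps ** 2)))

def noisy_tracking_error(rng, n_steps=500, dt=0.01, noise=0.2, gain=2.0):
    """Toy stand-in for a tracking performance measure: a scalar error
    driven by feedback and a Brownian disturbance, clipped to [0, 1]."""
    e = 0.5
    worst = abs(e)
    for _ in range(n_steps):
        e += dt * (-gain * e) + noise * np.sqrt(dt) * rng.normal()
        worst = max(worst, abs(e))
    return min(worst, 1.0)

if __name__ == "__main__":
    eps, delta = 0.02, 0.01
    n = chernoff_sample_size(eps, delta)  # about 6.6e3 samples for these values
    rng = np.random.default_rng(0)
    samples = np.array([noisy_tracking_error(rng) for _ in range(n)])
    print(f"N = {n}, E[measure] ~ {samples.mean():.3f}, "
          f"P(measure <= 0.6) ~ {(samples <= 0.6).mean():.3f}")
```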
The paper is organized as follows. The next subsection summarizes the main differences between the randomized and stochastic simulation results reported in Section 4. Section 2 formulates the problem: it explains the concepts of static and dynamic random processes and introduces the necessary background on stochastic differential equations. Section 3 deals with the Chernoff bound and the robustness of the system performance with respect to a given performance measure. Section 4 provides the results of computer simulations addressing the robustness problem for some example control systems. Section 5 concludes the paper.