Journal Article in IEEE Transactions on Cybernetics, Year: 2023

Nonfragile Output Feedback Tracking Control for Markov Jump Fuzzy Systems Based on Integral Reinforcement Learning Scheme

Abstract

In this article, a novel integral reinforcement learning (RL)-based nonfragile output feedback tracking control algorithm is proposed for uncertain Markov jump nonlinear systems represented by the Takagi–Sugeno fuzzy model. The nonfragile control problem is converted into a zero-sum game in which the control input and the uncertain disturbance input are regarded as two rival players. Based on the RL architecture, an offline parallel output feedback tracking learning algorithm is first designed to solve the fuzzy stochastic coupled algebraic Riccati equations for Markov jump fuzzy systems. To remove the requirement of precise system information and transition probabilities, an online parallel integral RL-based algorithm is then designed. The tracking objective is achieved, and the stochastic asymptotic stability and expected performance of the considered systems are guaranteed via the Lyapunov stability theory and the stochastic analysis method. Finally, the effectiveness of the proposed control algorithm is verified on a robot arm system.
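The abstract casts nonfragile control as a two-player zero-sum game whose solution is characterized by game-type algebraic Riccati equations, which the integral RL scheme approximates online without full model knowledge. As a rough illustration of that underlying idea only, the following is a minimal model-based policy-iteration sketch for a single linear zero-sum game; it is not the authors' output feedback, Markov jump fuzzy, or data-driven algorithm, and the plant matrices, the attenuation level gamma, and the routine name are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def zero_sum_policy_iteration(A, B, D, Q, R, gamma, iters=50, tol=1e-9):
    """Model-based policy iteration for the zero-sum game Riccati equation
    A'P + PA + Q - P B R^{-1} B' P + (1/gamma^2) P D D' P = 0.
    Integral RL methods replace the Lyapunov solve below with a
    data-driven policy-evaluation step along system trajectories."""
    n, m = B.shape
    K = np.zeros((m, n))            # control-player gain,      u = -K x
    L = np.zeros((D.shape[1], n))   # disturbance-player gain,  w =  L x
    P_prev = np.zeros((n, n))
    for _ in range(iters):
        Ai = A - B @ K + D @ L
        rhs = -(Q + K.T @ R @ K - gamma**2 * L.T @ L)
        P = solve_continuous_lyapunov(Ai.T, rhs)   # policy evaluation
        K = np.linalg.solve(R, B.T @ P)            # improve control policy
        L = (1.0 / gamma**2) * (D.T @ P)           # improve disturbance policy
        if np.linalg.norm(P - P_prev) < tol:
            break
        P_prev = P
    return P, K, L

# Hypothetical second-order plant, used only to exercise the routine.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
D = np.array([[0.0], [0.5]])
P, K, L = zero_sum_policy_iteration(A, B, D, Q=np.eye(2), R=np.eye(1), gamma=5.0)
print("P =\n", P)
```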
No file deposited

Dates and versions

hal-03825983, version 1 (23-10-2022)

Identifiers

Cite

Jing Wang, Jiacheng Wu, Jinde Cao, Mohammed Chadli, Hao Shen. Nonfragile Output Feedback Tracking Control for Markov Jump Fuzzy Systems Based on Integral Reinforcement Learning Scheme. IEEE Transactions on Cybernetics, 2023, 53 (7), pp. 4521-4530. ⟨10.1109/TCYB.2022.3203795⟩. ⟨hal-03825983⟩