FORMATION CONTROL FOR MULTI-UNMANNED VEHICLES VIA DEEP REINFORCEMENT LEARNING
Abstract
Targeting the problem of multi-agent formation control, this work investigates the formation control of a multi-unmanned-vehicle system using the Double Deep Q-Network (DDQN) deep reinforcement learning algorithm. The approach combines consensus control with an accompanying configuration to model and simplify the formation control problem. A state space is established from relative distances and velocities, so that control inputs are independent of global information; an action space is designed over nine principal motion directions; and reward functions are formulated from relative distance and relative velocity. The work covers the design of the neural network architecture, network training, and the development of a motion simulation environment. The trained controller can be applied directly to the formation task of underactuated unmanned vehicles with nonholonomic constraints, yielding a model-free control approach that requires only motion data rather than a precise model. The effectiveness of the controller is verified through extensive motion simulations in various scenarios, including multiple formations, initial positions, and trajectories, as well as formation transformation, switching communication topologies, and communication failures; the controller performs effectively in all scenarios. Finally, the strategy in the initial stage of formation is optimized by defining waiting and starting conditions, which effectively reduces control energy consumption; the optimization is validated through motion simulation and comparison.
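The action-space and reward design summarized above can be sketched as follows. This is a minimal illustrative Python sketch, not the paper's implementation: the choice of eight compass headings plus a "stay" action for the nine motion directions, and the specific penalty weights `w_d` and `w_v`, are assumptions for illustration only.

```python
import math

# Hypothetical sketch of the abstract's design: a discrete action space of
# nine motion directions (assumed here to be eight compass headings plus a
# "stay" action), and a reward shaped by relative-distance and
# relative-velocity errors. Names and weights are illustrative assumptions.

# Nine actions: a no-op plus unit vectors for the eight principal directions.
ACTIONS = [(0.0, 0.0)] + [
    (math.cos(k * math.pi / 4), math.sin(k * math.pi / 4)) for k in range(8)
]

def reward(rel_dist, desired_dist, rel_speed, w_d=1.0, w_v=0.5):
    """Penalize deviation from the desired inter-vehicle distance and any
    residual relative speed (reward is zero when the formation is locked)."""
    return -w_d * abs(rel_dist - desired_dist) - w_v * abs(rel_speed)
```

Because the reward depends only on relative quantities, the learned policy needs no global position information, which is the property the abstract highlights.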