Station keeping control method based on deep reinforcement learning for aerostat using ambient wind
Author: 柏方超, 杨希祥, 邓小龙, et al.
Affiliation: College of Aerospace Science and Engineering, National University of Defense Technology, Changsha 410073, China

CLC Number: V274


Abstract:

To address the station keeping control problem of stratospheric aerostats in a dynamic wind field, station keeping controllers were designed based on the deep reinforcement learning D3QN algorithm for the different control channels of an aerostat operated with ambient wind, and the impact of different reward functions on controller performance was studied. Station keeping control simulations were carried out under the task constraints of a station keeping duration of three days and a station keeping radius of 50 km. Results show that, compared with the station keeping controller designed with the DDQN method, the performance of the controller designed with the D3QN method is significantly improved. When the trajectory of the aerostat is adjusted by altitude control alone, the average station keeping radius reaches 25.26 km and the station keeping time ratio is 96.25%. With the aid of horizontal propulsion, the average station keeping radius can be significantly reduced and the station keeping time ratio significantly increased. The strong robustness of the station keeping controller based on deep reinforcement learning was also verified, and controllers can be designed with different reward functions to meet the requirements of different station keeping tasks.
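Only the abstract is available on this page. As a rough illustration of the D3QN (dueling double deep Q-network) technique it refers to, the sketch below shows a dueling Q-network, the double-Q learning target, and a shaped station keeping reward in PyTorch. The layer sizes, the state and action definitions, and the reward shaping (penalizing distance beyond the 50 km station radius) are assumptions for illustration only, not the authors' implementation.

    # Minimal sketch of a D3QN (dueling double DQN) update for station keeping.
    # Assumptions (not from the paper): state = horizontal position relative to
    # the station center, altitude, and wind components; actions = discrete
    # altitude (or propulsion) commands; reward = bonus inside the 50 km circle,
    # scaled penalty outside it.
    import torch
    import torch.nn as nn


    class DuelingQNet(nn.Module):
        def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
            super().__init__()
            self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
            self.value = nn.Linear(hidden, 1)              # state value V(s)
            self.advantage = nn.Linear(hidden, n_actions)  # advantages A(s, a)

        def forward(self, state: torch.Tensor) -> torch.Tensor:
            h = self.feature(state)
            v, a = self.value(h), self.advantage(h)
            # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
            return v + a - a.mean(dim=1, keepdim=True)


    def station_keeping_reward(distance_km: torch.Tensor, radius_km: float = 50.0) -> torch.Tensor:
        """Hypothetical shaped reward: +1 inside the station circle, scaled penalty outside."""
        inside = (distance_km <= radius_km).float()
        return inside - (1.0 - inside) * (distance_km - radius_km) / radius_km


    def d3qn_loss(online: DuelingQNet, target: DuelingQNet, batch, gamma: float = 0.99):
        """Double-Q target: the online net selects the next action, the target net evaluates it."""
        s, a, r, s_next, done = batch
        q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            best_next = online(s_next).argmax(dim=1, keepdim=True)
            q_next = target(s_next).gather(1, best_next).squeeze(1)
            y = r + gamma * (1.0 - done) * q_next
        return nn.functional.smooth_l1_loss(q, y)

In a full training loop, the target network's weights would be copied from the online network at intervals, and the state variables, action discretization, and reward weights would have to follow the paper's own setup.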

Get Citation

柏方超, 杨希祥, 邓小龙, et al. Station keeping control method based on deep reinforcement learning for aerostat using ambient wind[J]. Journal of National University of Defense Technology, 2025, 47(2): 78-88.

History
  • Received: December 03, 2022
  • Online: April 14, 2025