One-on-One Air Combat Control Method Based on Situation Assessment and DDPG Algorithm
Author: HE Baoji, BAI Linting, WEN Pengcheng

Affiliation: Xi'an Aviation Institute of Computing Technology

CLC Number: V323.9

Fund Project:

Abstract:

Owing to its low cost and absence of casualties, autonomous air combat with unmanned aerial vehicles (UAVs) has attracted increasing attention. This paper builds on the deep deterministic policy gradient (DDPG) reinforcement learning method. Using a situation evaluation function as the reinforcement learning reward function, a comprehensive learning environment is designed that accounts for flight altitude limits, flight overload, and flight speed limits. The interaction between the DDPG algorithm and the learning environment is realized through a fully connected carrier-aircraft speed control network and an environment reward network. The termination conditions for air combat are defined in terms of abnormal altitude and speed, missile lock-on time, and combat duration. Through one-on-one air combat simulations, the effectiveness of the proposed combat control method is validated with respect to learning under environmental constraints, situation evaluation scores, and combat-mode learning. This research can provide guidance for the further development of autonomous air combat.
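To make the design concrete, the sketch below (Python, NumPy only) shows one way the situation-evaluation reward and the termination conditions described above (abnormal altitude or speed, sustained missile lock, combat-time limit) could be wired into a single environment step. All limits, weights, and helper names (situation_score, step_reward_and_done, LOCK_TIME_WIN, and so on) are illustrative assumptions; the abstract does not specify the paper's actual functions or constants.

import numpy as np

# Hypothetical limits -- the paper's actual values are not given in the abstract.
ALT_MIN, ALT_MAX = 500.0, 12000.0    # flight altitude limits [m]
V_MIN, V_MAX = 80.0, 400.0           # flight speed limits [m/s]
G_MAX = 9.0                          # flight overload (load factor) limit
LOCK_TIME_WIN = 3.0                  # sustained missile-lock time that ends the fight [s]
T_MAX = 300.0                        # maximum combat time [s]

def situation_score(own, enemy):
    """Toy situation-evaluation score combining angle and range advantage.

    own/enemy are dicts with 'pos' and 'vel' as 3-vectors; this stands in for
    the paper's situation evaluation function, which is not given here.
    """
    los = enemy["pos"] - own["pos"]
    r = np.linalg.norm(los) + 1e-6
    # Angle between own velocity and the line of sight (0 = nose on target).
    cos_ata = np.dot(own["vel"], los) / (np.linalg.norm(own["vel"]) * r + 1e-6)
    angle_adv = 1.0 - np.arccos(np.clip(cos_ata, -1.0, 1.0)) / np.pi
    range_adv = np.exp(-abs(r - 3000.0) / 3000.0)   # prefer a nominal firing range
    return 0.6 * angle_adv + 0.4 * range_adv

def step_reward_and_done(own, enemy, lock_timer, t):
    """Per-step reward and termination check in the spirit of the abstract."""
    reward = situation_score(own, enemy)
    alt, speed = own["pos"][2], np.linalg.norm(own["vel"])
    # Abnormal altitude or speed ends the episode with a penalty.
    if not (ALT_MIN <= alt <= ALT_MAX) or not (V_MIN <= speed <= V_MAX):
        return reward - 10.0, True
    # Soft penalty for exceeding the overload limit.
    if own.get("load_factor", 0.0) > G_MAX:
        reward -= 1.0
    # Sustained missile lock on the enemy, or combat-time expiry, also ends the fight.
    if lock_timer >= LOCK_TIME_WIN:
        return reward + 10.0, True
    if t >= T_MAX:
        return reward, True
    return reward, False

In a DDPG training loop, the reward returned here would be passed to the critic and the done flag would terminate the episode, while the actor (the speed control network mentioned above) would supply the actions that update own["vel"] between calls.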

Get Citation

HE Baoji, BAI Linting, WEN Pengcheng. One-on-One Air Combat Control Method Based on Situation Assessment and DDPG Algorithm[J]. Advances in Aeronautical Science and Engineering, 2024, 15(2): 179-187.

History
  • Received: June 19, 2023
  • Revised: September 19, 2023
  • Accepted: November 07, 2023
  • Online: March 08, 2024
  • Published: