Research on small target damage detection of aero-engine based on improved YOLOv4
-
Abstract: Intelligent aero-engine damage detection is an important research direction in aircraft fault diagnosis. To address the poor performance of existing object detection models on small target damage of aero-engines, an improved multi-scale detection method based on You Only Look Once version 4 (YOLOv4) was proposed. A new shallow feature fusion layer was constructed in the path aggregation network (PANet), fusing shallower features with deep features to improve the network's detection performance on small target damage. To reduce redundant parameters in the network, depthwise separable convolution was introduced into the neck, and the standard convolutions there were reconstructed in depthwise separable form. Experiments showed that the improved YOLOv4 increased the detection accuracy on small target damage by 3.43%, reduced the model size by 54.06 MB, and increased the detection speed by 31.03%. These results indicate that the improved YOLOv4 model has better detection performance for small target damage.
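The neck modification described above replaces each standard convolution with a depthwise convolution followed by a 1×1 pointwise convolution. A minimal sketch of the parameter-count arithmetic behind the reported model-size reduction (the channel and kernel sizes below are illustrative, not taken from the paper):

```python
# Parameter counts for a standard k x k convolution versus its
# depthwise separable reconstruction (bias terms omitted).

def conv_params(c_in: int, c_out: int, k: int) -> int:
    # Standard convolution: one k x k x c_in kernel per output channel.
    return k * k * c_in * c_out

def dsconv_params(c_in: int, c_out: int, k: int) -> int:
    # Depthwise step: one k x k kernel per input channel;
    # pointwise step: a 1 x 1 convolution mixing channels.
    return k * k * c_in + c_in * c_out

# Illustrative neck-like layer: 256 -> 512 channels, 3 x 3 kernel.
std = conv_params(256, 512, 3)   # 1,179,648 parameters
ds = dsconv_params(256, 512, 3)  # 133,376 parameters
print(f"standard: {std}, separable: {ds}, ratio: {ds / std:.3f}")
```

The ratio works out to 1/c_out + 1/k², so with 3×3 kernels the separable form needs roughly one ninth of the parameters of the standard form, which is consistent in direction with the 54.06 MB model-size reduction reported in the abstract.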
-
Table 1. Ablation experiment results

Method        P/%     Model size/MB   Parameters    Detection speed/(frame/s)
YOLOv4-a      92.74   244.25          64029231      29
YOLOv4-b      96.90   248.34          65100220      27
YOLOv4-c      96.17   190.19          49856380      38

Table 2. Comparative analysis of detection performance of different models

Model         P/%     Model size/MB   Parameters    Detection speed/(frame/s)
SSD           85.28   90.07           23612246      43
YOLOv3        90.50   234.69          61523734      31
Faster R-CNN  94.83   521.43          136689024     11
YOLOv4-c      96.17   190.19          49856380      38