Overview of the YOLO Object Detection Algorithm
DOI: https://doi.org/10.54097/8n22df05
Keywords: YOLO, Target detection, Convolutional Neural Networks
Abstract
The YOLO (You Only Look Once) object detection algorithm, first proposed in 2015, has since evolved to YOLOv12, with steady gains in detection speed and accuracy, and remains an active research topic. This article introduces the basic network structure of the YOLO series, summarizes the innovations, advantages, and limitations of YOLOv1 through YOLOv12, and reviews the applications of YOLO detectors and their improved variants in the industrial, agricultural, and security fields. On that basis, it discusses likely future development trends of the YOLO algorithm.
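To make the "basic network structure" concrete: YOLOv1 divides the input image into an S×S grid, and each cell predicts B bounding boxes (x, y, w, h, confidence) plus C class probabilities, giving an output tensor of shape S×S×(B·5+C). The following is a minimal illustrative sketch of how such a tensor can be decoded into absolute boxes; the shapes, threshold, and function name are assumptions for illustration, not the paper's reference implementation.

```python
import numpy as np

def decode_yolov1(pred, S=7, B=2, C=20, conf_thresh=0.25, img_size=448):
    """Decode a YOLOv1-style output tensor of shape (S, S, B*5 + C)
    into absolute boxes [x1, y1, x2, y2, score, class_id].

    As in the YOLOv1 paper, (x, y) are offsets within a grid cell and
    (w, h) are fractions of the whole image; this sketch assumes the
    default S=7, B=2, C=20, 448x448 configuration.
    """
    boxes = []
    cell = img_size / S                      # side length of one grid cell
    for i in range(S):                       # grid row
        for j in range(S):                   # grid column
            cls_probs = pred[i, j, B * 5:]   # C class probabilities
            cls_id = int(np.argmax(cls_probs))
            for b in range(B):
                x, y, w, h, conf = pred[i, j, b * 5:b * 5 + 5]
                score = conf * cls_probs[cls_id]  # class-specific score
                if score < conf_thresh:
                    continue
                cx = (j + x) * cell          # box center in pixels
                cy = (i + y) * cell
                bw, bh = w * img_size, h * img_size
                boxes.append([cx - bw / 2, cy - bh / 2,
                              cx + bw / 2, cy + bh / 2, score, cls_id])
    return boxes
```

In practice a non-maximum suppression step would follow to merge overlapping predictions; later YOLO versions replace this single-scale grid with anchor boxes and multi-scale feature maps, but the decode-per-cell idea is the same.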
References
[1] Redmon J, Divvala S, Girshick R, et al. You only look once: Unified, real-time object detection [C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 779-788.
[2] Redmon J, Farhadi A. YOLO9000: better, faster, stronger [C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2017: 7263-7271.
[3] Redmon J, Farhadi A. YOLOv3: An incremental improvement [J]. arXiv preprint arXiv:1804.02767, 2018.
[4] Wen C, Wen J, Li J, et al. Lightweight silkworm recognition based on Multi-scale feature fusion [J]. Computers and electronics in agriculture, 2022, 200: 107234.
[5] Wang H, Jin Y, Ke H, et al. DDH-YOLOv5: improved YOLOv5 based on Double IoU-aware Decoupled Head for object detection [J]. Journal of Real-Time Image Processing, 2022, 19(6): 1023-1033.
[6] Norkobil Saydirasulovich S, Abdusalomov A, Jamil M K, et al. A YOLOv6-based improved fire detection approach for smart city environments [J]. Sensors, 2023, 23(6): 3161.
[7] Wang C Y, Bochkovskiy A, Liao H Y M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors [C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023: 7464-7475.
[8] Hussain M. YOLO-v1 to YOLO-v8, the rise of YOLO and its complementary nature toward digital manufacturing and industrial defect detection [J]. Machines, 2023, 11(7): 677.
[9] Wang C Y, Yeh I H, Liao H Y M. YOLOv9: Learning what you want to learn using programmable gradient information [C]//European conference on computer vision. Cham: Springer Nature Switzerland, 2024: 1-21.
[10] Wang A, Chen H, Liu L, et al. YOLOv10: Real-time end-to-end object detection [J]. Advances in Neural Information Processing Systems, 2024, 37: 107984-108011.
[11] Cheng S, Han Y, Wang Z, et al. An underwater object recognition system based on improved YOLOv11 [J]. Electronics, 2025, 14(1): 201.
[12] Tian Y, Ye Q, Doermann D. YOLOv12: Attention-centric real-time object detectors [J]. arXiv preprint arXiv:2502.12524, 2025.
[13] Alhwaiti Y, Khan M, Asim M, et al. Leveraging YOLO deep learning models to enhance plant disease identification [J]. Scientific Reports, 2025, 15(1): 7969.
[14] Yu C, Xie J, Tony F J A. BGM-YOLO: An accurate and efficient detector for detecting plant disease [J]. PLoS ONE, 2025, 20(5): e0322750.
[15] Wang J, Dai H, Chen T, et al. Toward surface defect detection in electronics manufacturing by an accurate and lightweight YOLO-style object detector [J]. Scientific Reports, 2023, 13(1): 7062.
[16] Lu M, Sheng W, Zou Y, et al. WSS-YOLO: An improved industrial defect detection network for steel surface defects [J]. Measurement, 2024, 236: 115060.
[17] Gawande U, Hajari K, Golhar Y. Novel person detection and suspicious activity recognition using enhanced YOLOv5 and motion feature map [J]. Artificial Intelligence Review, 2024, 57(2): 16.
[18] Wang G, Ding H, Duan M, et al. Fighting against terrorism: A real-time CCTV autonomous weapons detection based on improved YOLO v4 [J]. Digital Signal Processing, 2023, 132: 103790.
License
Copyright (c) 2026 International Journal of Advanced Engineering and Technology Research

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.