Open Access
SHS Web Conf.
Volume 194, 2024
The 6th ETLTC International Conference on ICT Integration in Technical Education (ETLTC2024)
Article Number 01004
Number of page(s) 11
Section Intelligent Applications in Society
Published online 26 June 2024
  1. J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You Only Look Once: Unified, Real-Time Object Detection,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 2016, pp. 779–788, doi: 10.1109/CVPR.2016.91.
  2. X. Feng, Y. Jiang, X. Yang, M. Du, and X. Li, “Computer vision algorithms and hardware implementations: A survey,” Integration, vol. 69, pp. 309–320, 2019, doi: 10.1016/j.vlsi.2019.07.005.
  3. M. Umair, M. U. Farooq, R. H. Raza, Q. Chen, and B. Abdulhai, “Efficient Video-based Vehicle Queue Length Estimation using Computer Vision and Deep Learning for an Urban Traffic Scenario,” Processes, vol. 9, no. 10, p. 1786, 2021.
  4. S. He et al., “Automatic Recognition of Traffic Signs Based on Visual Inspection,” IEEE Access, vol. 9, pp. 43253–43261, 2021, doi: 10.1109/ACCESS.2021.3059052.
  5. S. Houben, J. Stallkamp, J. Salmen, M. Schlipsing, and C. Igel, “Detection of traffic signs in real-world images: The German traffic sign detection benchmark,” The 2013 International Joint Conference on Neural Networks (IJCNN), Dallas, TX, USA, 2013, pp. 1–8, doi: 10.1109/IJCNN.2013.6706807.
  6. L. G. Shapiro, “Computer vision: the last 50 years,” International Journal of Parallel, Emergent and Distributed Systems, vol. 35, no. 2, pp. 112–117, 2020.
  7. N. Donthu, S. Kumar, D. Mukherjee, N. Pandey, and W. M. Lim, “How to conduct a bibliometric analysis: An overview and guidelines,” Journal of Business Research, vol. 133, pp. 285–296, 2021.
  8. Z. Lv, R. Lou, and A. K. Singh, “AI-empowered communication systems for intelligent transportation systems,” IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 7, pp. 4579–4587, 2020.
  9. I. Krontiris, K. Grammenou, K. Terzidou, M. Zacharopoulou, M. Tsikintikou, F. Baladima, … and K. Kaouras, “Autonomous vehicles: Data protection and ethical considerations,” in Proceedings of the 4th ACM Computer Science in Cars Symposium, Dec. 2020, pp. 1–10.
  10. K. Dick, L. Russell, Y. Souley Dosso, F. Kwamena, and J. R. Green, “Deep learning for critical infrastructure resilience,” Journal of Infrastructure Systems, vol. 25, no. 2, 05019003, 2019.
  11. J. Janai, F. Güney, A. Behl, and A. Geiger, “Computer vision for autonomous vehicles: Problems, datasets and state of the art,” Foundations and Trends® in Computer Graphics and Vision, vol. 12, no. 1–3, pp. 1–308, 2020.
  12. A. A. Khan, A. A. Laghari, and S. A. Awan, “Machine learning in computer vision: a review,” EAI Endorsed Transactions on Scalable Information Systems, vol. 8, no. 32, e4, 2021.
  13. J. Tao, H. Wang, X. Zhang, X. Li, and H. Yang, “An object detection system based on YOLO in traffic scene,” 2017 6th International Conference on Computer Science and Network Technology (ICCSNT), Dalian, China, 2017, pp. 315–319, doi: 10.1109/ICCSNT.2017.8343709.
  14. Q. Xu, R. Lin, H. Yue, H. Huang, Y. Yang, and Z. Yao, “Research on Small Target Detection in Driving Scenarios Based on Improved Yolo Network,” IEEE Access, vol. 8, pp. 27574–27583, 2020, doi: 10.1109/ACCESS.2020.2966328.
  15. A. Sarda, S. Dixit, and A. Bhan, “Object Detection for Autonomous Driving using YOLO [You Only Look Once] algorithm,” 2021 Third International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV), Tirunelveli, India, 2021, pp. 1370–1374, doi: 10.1109/ICICV50876.2021.9388577.
  16. F. M. Talaat and H. ZainEldin, “An improved fire detection approach based on YOLO-v8 for smart cities,” Neural Computing and Applications, vol. 35, no. 28, pp. 20939–20954, Oct. 2023, doi: 10.1007/s00521-023-08809-1.
  17. N. Sharma and R. D. Garg, “Real-Time Computer Vision for Transportation Safety using Deep Learning and IoT,” 2022 International Conference on Engineering and Emerging Technologies (ICEET), Kuala Lumpur, Malaysia, 2022, pp. 1–5, doi: 10.1109/ICEET56468.2022.10007226.
  18. G. T. S. Ho, Y. P. Tsang, C. H. Wu, W. H. Wong, and K. L. Choy, “A computer vision-based roadside occupation surveillance system for intelligent transport in smart cities,” Sensors, vol. 19, no. 8, 1796, 2019.
  19. M. C. Chang, C. K. Chiang, C. M. Tsai, Y. K. Chang, H. L. Chiang, Y. A. Wang, … and H. Y. Tseng, “AI City Challenge 2020: Computer vision for smart transportation applications,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020, pp. 620–621.
  20. R. Laroca, L. A. Zanlorensi, G. R. Gonçalves, E. Todt, W. R. Schwartz, and D. Menotti, “An efficient and layout-independent automatic license plate recognition system based on the YOLO detector,” IET Intelligent Transport Systems, vol. 15, no. 4, pp. 483–503, 2021.
  21. P. Albacar, Ò. Lorente, E. Mainou, and I. Riera, “Video Surveillance for Road Traffic Monitoring,” 2021, arXiv:2105.04908.
  22. B. Baheti, S. Innani, S. Gajre, and S. Talbar, “Semantic scene segmentation in unstructured environment with modified DeepLabV3+,” Pattern Recognition Letters, vol. 138, pp. 223–229, 2020, ISSN 0167-8655.
  23. S. Das, A. A. Fime, N. Siddique, and M. M. A. Hashem, “Estimation of Road Boundary for Intelligent Vehicles Based on DeepLabV3+ Architecture,” IEEE Access, vol. 9, pp. 121060–121075, 2021, doi: 10.1109/ACCESS.2021.3107353.
