Lane Line and Object Detection Using Yolo v3


Authors : Shiva Charan Vanga; Rajesh Vangari; Shyamala Vasre; Dheekshith Rao Nayini; A. Amara Jyothi

Volume/Issue : Volume 9 - 2024, Issue 6 - June


Google Scholar : https://tinyurl.com/bdctyr5z

Scribd : https://tinyurl.com/3yzx5b62

DOI : https://doi.org/10.38124/ijisrt/IJISRT24JUN1657



Abstract : In the contemporary age, creating autonomous vehicles is a crucial starting point for the advancement of intelligent transportation systems that rely on sophisticated telecommunications network infrastructure, including the upcoming 6G networks. The paper addresses two significant issues, namely lane detection and obstacle detection (such as road signs, traffic lights, vehicles ahead, etc.) using image processing algorithms. To overcome the low accuracy of traditional image processing methods and the slow real-time performance of deep learning-based methods, lane and object detection algorithms for smart traffic are proposed. We first rectify the distortion introduced by the camera and then employ a threshold algorithm for lane detection. A specific region of interest is extracted and an inverse perspective transform is applied to obtain a top-down view. We then apply the sliding window technique to identify the pixels that belong to each lane and fit them with a quadratic equation. The YOLO algorithm is well suited to identifying various types of obstacles, making it a valuable tool for solving this identification problem. Finally, we use real-time videos and a straightforward dataset to run simulations of the proposed algorithm. The simulation outcomes indicate that the accuracy of the proposed method for lane detection is 97.91% with a processing time of 0.0021 seconds. The proposed obstacle detection achieves an accuracy of 81.90% and takes only 0.022 seconds to process. Compared with conventional image processing techniques, the proposed method achieves an average accuracy of 89.90% and an execution time of 0.024 seconds, demonstrating robustness against noise. The findings show that the suggested algorithm can be implemented in self-driving car systems, allowing efficient and fast processing over advanced networks.
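The abstract outlines a concrete processing chain for lane detection (undistortion, thresholding, region of interest, inverse perspective transform, sliding-window search, quadratic fit). The following is a minimal, illustrative sketch of that chain in Python with OpenCV and NumPy; the camera matrix, distortion coefficients, thresholds, and warp source/destination points are placeholder assumptions, not values taken from the paper.

```python
# Minimal sketch of the lane-detection chain described in the abstract, assuming an
# OpenCV/NumPy implementation: undistort -> colour/gradient threshold ->
# perspective warp to a top-down view -> sliding-window search -> quadratic fit.
# camera_matrix, dist_coeffs and the warp points are illustrative placeholders.
import cv2
import numpy as np

def undistort(bgr, camera_matrix, dist_coeffs):
    """Correct lens distortion using previously calibrated camera parameters."""
    return cv2.undistort(bgr, camera_matrix, dist_coeffs)

def threshold_binary(bgr, s_thresh=(120, 255), sx_thresh=(20, 100)):
    """Combine an HLS S-channel threshold with a Sobel-x gradient threshold."""
    s = cv2.cvtColor(bgr, cv2.COLOR_BGR2HLS)[:, :, 2]
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    sobelx = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 1, 0))
    sobelx = np.uint8(255 * sobelx / (sobelx.max() + 1e-6))
    binary = np.zeros_like(s)
    binary[((s >= s_thresh[0]) & (s <= s_thresh[1])) |
           ((sobelx >= sx_thresh[0]) & (sobelx <= sx_thresh[1]))] = 1
    return binary

def birds_eye(binary, src_pts, dst_pts):
    """Warp the region of interest to a top-down (bird's-eye) view."""
    h, w = binary.shape[:2]
    M = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(binary, M, (w, h))

def _window_search(warped, x_base, nwindows=9, margin=100, minpix=50):
    """Slide fixed-height windows upward from x_base, collecting lane pixels."""
    h = warped.shape[0]
    nonzeroy, nonzerox = warped.nonzero()
    win_h, x_cur, inds = h // nwindows, x_base, []
    for win in range(nwindows):
        y_low, y_high = h - (win + 1) * win_h, h - win * win_h
        good = ((nonzeroy >= y_low) & (nonzeroy < y_high) &
                (nonzerox >= x_cur - margin) & (nonzerox < x_cur + margin)).nonzero()[0]
        inds.append(good)
        if len(good) > minpix:              # re-centre the next window on the found pixels
            x_cur = int(nonzerox[good].mean())
    inds = np.concatenate(inds)
    return nonzerox[inds], nonzeroy[inds]

def fit_lanes(warped):
    """Seed windows from histogram peaks and fit each lane as x = a*y^2 + b*y + c."""
    h, w = warped.shape
    histogram = np.sum(warped[h // 2:, :], axis=0)
    mid = w // 2
    lx, ly = _window_search(warped, int(np.argmax(histogram[:mid])))
    rx, ry = _window_search(warped, int(np.argmax(histogram[mid:])) + mid)
    return np.polyfit(ly, lx, 2), np.polyfit(ry, rx, 2)
```

For the obstacle-detection stage, the sketch below runs a standard Darknet YOLOv3 model through OpenCV's DNN module. The file names yolov3.cfg and yolov3.weights and the confidence/NMS thresholds are assumptions for illustration; the paper does not specify its model files.

```python
# Minimal sketch of YOLOv3 obstacle detection through OpenCV's DNN module.
# Model/config file names and thresholds are assumptions for illustration.
import cv2
import numpy as np

def detect_obstacles(bgr, cfg="yolov3.cfg", weights="yolov3.weights",
                     conf_thresh=0.5, nms_thresh=0.4):
    """Return (class_id, confidence, [x, y, w, h]) for each detected object."""
    net = cv2.dnn.readNetFromDarknet(cfg, weights)
    blob = cv2.dnn.blobFromImage(bgr, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    h, w = bgr.shape[:2]
    boxes, confidences, class_ids = [], [], []
    for out in outputs:                      # one output per YOLO detection layer
        for det in out:                      # det = [cx, cy, bw, bh, obj, class scores...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            conf = float(scores[class_id])
            if conf > conf_thresh:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                confidences.append(conf)
                class_ids.append(class_id)
    keep = cv2.dnn.NMSBoxes(boxes, confidences, conf_thresh, nms_thresh)
    return [(class_ids[i], confidences[i], boxes[i]) for i in np.array(keep).flatten()]
```

In a full pipeline of the kind the abstract describes, the fitted lane polynomials would be projected back onto the undistorted frame with the inverse of the perspective transform and overlaid together with the detected obstacle boxes.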

Keywords : Component, Formatting, Style, Styling, Insert.


