Behavior Cloning for Self Driving Cars using Attention Models


Authors : M Shvejan Shashank; Saikrishna Prathapaneni; M. Anil; N Thanuja Sri; Mohammed Owais; B Saikumar

Volume/Issue : Volume 6 - 2021, Issue 12 - December

Google Scholar : http://bitly.ws/gu88

Scribd : https://bit.ly/3JIVWiW

Vision transformers have started making waves in deep learning by replacing typical convolutional neural networks (CNNs) in tasks such as image classification, object detection, and segmentation. Their use can be extended to autonomous cars, as it has been shown that a pure transformer architecture can outperform CNNs when trained on a large dataset while using comparatively fewer computational resources [1]. Vision transformers can be applied in self-driving cars to predict the optimal steering angle from images of the surroundings. They take advantage of the attention mechanism to focus on the most important elements of the image, such as road lanes, other vehicles, and traffic signs, and can produce better results than CNN models. This paper discusses the implementation of behavior cloning using vision transformers in a self-driving car simulated in the Udacity self-driving car simulator.

Keywords : Transformers, Self-Driving Vehicles, Vision Transformers, Convolutional Neural Networks (CNN), Behavior Cloning.
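
To make the described setup concrete, the following is a minimal sketch of a vision-transformer regressor that maps a camera frame to a single steering angle and is trained as supervised regression against recorded human steering angles. PyTorch, the patch size, embedding width, depth, and image resolution are all illustrative assumptions; the paper's actual architecture and training details are not given in the abstract.

# Minimal sketch (assumption): a simplified vision transformer that regresses a
# steering angle from a front-camera frame, in the spirit of behavior cloning.
# All hyperparameters below are illustrative, not values taken from the paper.
import torch
import torch.nn as nn


class ViTSteeringRegressor(nn.Module):
    def __init__(self, image_size=160, patch_size=16, dim=256, depth=6, heads=8):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Patch embedding: split the image into patches and project each to `dim`.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        # Learnable [CLS] token and positional embeddings.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        # Regression head: a single output, the steering angle.
        self.head = nn.Linear(dim, 1)

    def forward(self, x):
        b = x.size(0)
        patches = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        cls = self.cls_token.expand(b, -1, -1)
        tokens = torch.cat([cls, patches], dim=1) + self.pos_embed
        encoded = self.encoder(tokens)
        return self.head(encoded[:, 0])  # read the angle off the [CLS] token


# Behavior cloning reduces to supervised regression: camera frames paired with
# the human driver's recorded steering angles, trained with a mean-squared-error loss.
model = ViTSteeringRegressor()
images = torch.randn(4, 3, 160, 160)   # placeholder batch of camera frames
angles = torch.randn(4, 1)             # placeholder recorded steering angles
loss = nn.functional.mse_loss(model(images), angles)
loss.backward()

Summarizing the sequence through a [CLS] token and training with an MSE objective are standard choices for this kind of regression; the model used in the paper may differ in these details.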
