Connected Vehicles Security: A Lightweight Machine Learning Model to Detect VANET Attacks
Format: Article
Language: English
Published: MDPI AG, 2025-06-01
Series: World Electric Vehicle Journal
Online Access: https://www.mdpi.com/2032-6653/16/6/324
Summary: Vehicular ad hoc networks (VANETs) aim to manage traffic, prevent accidents, and regulate various aspects of road traffic. However, owing to their nature, the security of VANETs remains a significant concern. This study reviews VANET vulnerabilities and attacks and investigates a number of security models recently introduced to counter VANET attacks, with a focus on machine learning detection methods. This investigation confirms that several challenges remain unsolved. Accordingly, the study introduces a lightweight machine learning model with an information gain feature selection method to detect VANET attacks. A balanced version of the well-known and recent dataset CISDS2017 was developed by applying a random oversampling technique, and the resulting dataset was used to train, test, and evaluate the proposed model. In other words, two layers of enhancement were applied: a suitable feature selection technique and a fix for the dataset imbalance problem. The results show that the proposed model, which is based on the Random Forest (RF) classifier, achieves excellent performance in terms of classification accuracy, computational cost, and classification error. It achieves an accuracy rate of 99.8%, outperforming all benchmark classifiers, including AdaBoost, decision tree (DT), K-nearest neighbors (KNN), and multi-layer perceptron (MLP). To the best of our knowledge, this model outperforms all existing classification techniques. In terms of processing cost, it consumes the least processing time, requiring only 69%, 59%, 35%, and 1.4% of the AdaBoost, DT, KNN, and MLP processing times, respectively. Its classification error is negligible.
ISSN: 2032-6653
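
The summary describes a three-step pipeline: balance the CISDS2017 dataset with random oversampling, select features by information gain, and train a Random Forest classifier benchmarked against AdaBoost, DT, KNN, and MLP. The sketch below illustrates that kind of pipeline with scikit-learn and imbalanced-learn; it is not the authors' code, and the file path, label column, number of selected features, and forest size are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch of the pipeline outlined in the abstract (assumed paths/parameters).
import pandas as pd
from imblearn.over_sampling import RandomOverSampler
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load the flow-level dataset (file name and "Label" column are assumptions).
df = pd.read_csv("cisds2017_flows.csv")
X, y = df.drop(columns=["Label"]), df["Label"]

# Fix the class-imbalance problem by randomly oversampling minority classes.
X_bal, y_bal = RandomOverSampler(random_state=42).fit_resample(X, y)

# Information-gain-style feature selection: keep the k most informative features
# (mutual information is used here as the information gain criterion; k is assumed).
selector = SelectKBest(score_func=mutual_info_classif, k=20)
X_sel = selector.fit_transform(X_bal, y_bal)

# Train/test split and the Random Forest classifier used as the detection model.
X_train, X_test, y_train, y_test = train_test_split(
    X_sel, y_bal, test_size=0.2, stratify=y_bal, random_state=42
)
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, rf.predict(X_test)))
```

The same script can be repeated with AdaBoost, DT, KNN, or MLP classifiers in place of the Random Forest to reproduce the kind of accuracy and runtime comparison reported in the abstract.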