DNN Layer Specialization Through Sequential Training for Applications With Smart Road-User Interactions
Main Authors:
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/11059952/
Summary: The deployment of a large number of sensors in vehicles makes it possible to analyze road-user interactions, which is crucial for most applications in vehicular scenarios. In this context, Deep Neural Networks (DNNs) are a powerful tool for recognizing the complex data patterns that stem from interactions between users and smart roads. However, user behaviour changes depending on the road environment and on the user's features. Hence, a single DNN model cannot be effective for all users in all kinds of road environments; instead, a specific model may be needed for each user interacting with each type of environment. Unfortunately, this would require collecting and processing a very large amount of data to train each DNN model, which is not practical. This work therefore proposes a novel training technique, called Sequential Training, that partitions the Neural Network (NN) layers of the DNN model into two sets: one trained to be specific to the user, and one trained cooperatively to be specific to the road environment. The proposed technique is realized by means of Vehicle-to-Infrastructure (V2I) communication, through which the vehicle user can download the layers of the DNN model specific to the road environment where the vehicle is currently located. Those layers are then concatenated with the user-specific layers.
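The abstract describes the core mechanism: the DNN is split into environment-specific layers, which the vehicle downloads over V2I, and user-specific layers, which remain on the vehicle. The sketch below illustrates that layer split under assumed layer sizes and names; `SplitDNN`, `env_layers`, `user_layers`, and `load_environment_layers` are hypothetical, since the record does not specify the architecture or the training procedure.

```python
import torch
import torch.nn as nn

class SplitDNN(nn.Module):
    """Minimal sketch of a DNN partitioned into environment- and user-specific layers."""

    def __init__(self, in_dim=32, hidden_dim=64, out_dim=4):
        super().__init__()
        # Environment-specific block: trained cooperatively, replaced when the
        # vehicle enters a new road environment (weights downloaded via V2I).
        self.env_layers = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # User-specific block: trained for the individual user and kept on the vehicle.
        self.user_layers = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        # Environment layers feed into (are concatenated with) the user layers.
        return self.user_layers(self.env_layers(x))


def load_environment_layers(model: SplitDNN, downloaded_state: dict) -> None:
    """Swap in environment-specific weights downloaded over V2I, keeping user layers intact."""
    model.env_layers.load_state_dict(downloaded_state)


# Usage sketch: on entering a new road environment, the vehicle downloads that
# environment's layer weights (placeholder tensors here) and combines them with
# its locally trained user-specific layers before running inference.
model = SplitDNN()
downloaded = {k: torch.zeros_like(v) for k, v in model.env_layers.state_dict().items()}
load_environment_layers(model, downloaded)
prediction = model(torch.randn(1, 32))
```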
ISSN: 2169-3536