Feedback-Based Validation Learning
Main Authors: | , , , , |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2025-07-01 |
Series: | Computation |
Subjects: | |
Online Access: | https://www.mdpi.com/2079-3197/13/7/156 |
Summary: | This paper presents Feedback-Based Validation Learning (FBVL), a novel approach that transforms the role of validation datasets in deep learning. Unlike conventional methods, which use the validation dataset only for post-training performance evaluation, FBVL integrates it into the training process: real-time feedback on validation predictions guides the model’s weight adjustments, improving prediction accuracy and overall model performance. Importantly, FBVL preserves the integrity of the validation process, since only prediction outcomes on the validation dataset steer training adjustments; the training procedure never accesses the validation data directly. An empirical study on the Iris dataset, which comprises 150 samples from three species of Iris flowers, each characterized by four features, demonstrated FBVL’s effectiveness: it surpassed the previous best accuracy by approximately 7.14% and reduced loss by approximately 49.18% relative to previous methods. When applied to the Multimodal EmotionLines Dataset (MELD), FBVL showed its applicability across datasets and domains, achieving a test-set accuracy of 70.08%, approximately 3.12% above the previous best-reported accuracy, and a test-set micro F1-score of 70.07%, exceeding the previous best-reported value of 67.59%. These results demonstrate that FBVL enhances classification accuracy and model generalization, particularly in scenarios involving small or imbalanced datasets, offering practical benefits for designing more efficient and robust neural network architectures. |
ISSN: | 2079-3197 |
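The summary describes validation predictions feeding back into training without the optimizer touching the validation data directly, but the record does not specify FBVL's actual update rule. The sketch below is a hypothetical minimal interpretation, not the authors' method: a logistic-regression model trained by gradient descent on the training split only, where the trend of the validation loss modulates the learning rate (shrinking it when validation loss rises, growing it slightly when it falls). All names and the synthetic data are illustrative.

```python
# Hypothetical sketch of validation-feedback training (NOT the paper's FBVL
# algorithm, whose update rule the record does not specify): gradients come
# from the training split only; the validation set contributes a scalar
# feedback signal (its loss trend) that adjusts the learning rate.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 4-feature, 2-class data standing in for a small dataset like Iris.
X = rng.normal(size=(120, 4))
w_true = np.array([1.5, -2.0, 0.5, 1.0])
y = (X @ w_true + rng.normal(scale=0.5, size=120) > 0).astype(float)

# Train/validation split: weights are fit on (X_tr, y_tr) only.
X_tr, y_tr = X[:80], y[:80]
X_va, y_va = X[80:], y[80:]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(w, Xs, ys):
    p = sigmoid(Xs @ w)
    eps = 1e-12  # guard against log(0)
    return -np.mean(ys * np.log(p + eps) + (1 - ys) * np.log(1 - p + eps))

w = np.zeros(4)
lr = 0.5
prev_val = log_loss(w, X_va, y_va)
for epoch in range(200):
    # Gradient step uses training data only.
    p = sigmoid(X_tr @ w)
    grad = X_tr.T @ (p - y_tr) / len(y_tr)
    w -= lr * grad
    # Feedback step (assumed rule): halve the learning rate if validation
    # loss rose, otherwise grow it slightly, capped for stability.
    val = log_loss(w, X_va, y_va)
    lr = lr * 0.5 if val > prev_val else min(lr * 1.02, 1.0)
    prev_val = val

acc = float(np.mean((sigmoid(X_va @ w) > 0.5) == y_va.astype(bool)))
print("validation accuracy:", round(acc, 2))
```

The design choice to pass only the validation *loss trend* (a single scalar) back into training mirrors the summary's claim that FBVL guides weight adjustments via prediction outcomes without the optimizer directly consuming validation samples.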