Ensembles of Gradient Boosting Recurrent Neural Network for Time Series Data Prediction


Bibliographic Details
Main Authors: Shiqing Sang, Fangfang Qu, Pengcheng Nie
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects: Gradient boosting; LSTM; GRU; MGU; ensemble learning; time series data prediction
Online Access: https://ieeexplore.ieee.org/document/9438681/
collection DOAJ
description Ensemble deep learning combines the strengths of neural networks and ensemble learning, and has gradually become an emerging research direction. However, existing methods either lack theoretical support or require large integrated models. In this paper, Ensembles of Gradient Boosting Recurrent Neural Network (EGB-RNN) is proposed, which combines the gradient boosting ensemble framework with three types of recurrent neural network models: the Minimal Gated Unit (MGU), the Gated Recurrent Unit (GRU), and Long Short-Term Memory (LSTM). An RNN model serves as the base learner and is integrated into an ensemble learner by gradient boosting. To ensure that the ensemble model fits the data better, a Step Iteration Algorithm is designed to find an appropriate learning rate before the models are integrated. The proposed method is tested on four time-series datasets. Experimental results demonstrate that as the number of integrated models increases, the performance of the three types of EGB-RNN models tends to converge, and that the best EGB-RNN model and the best degree of ensemble vary across datasets. Statistical results also show that the designed EGB-RNN models outperform six baselines.
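The abstract describes gradient boosting with RNN base learners plus a learning-rate search before each learner is integrated. The following is a minimal sketch of that idea, not the paper's implementation: a simple linear autoregressive model stands in for the MGU/GRU/LSTM base learners so the example stays self-contained, and the candidate-rate grid search is only a crude illustration of what a "Step Iteration Algorithm" for choosing the rate might look like.

```python
import numpy as np

def fit_base_learner(X, residual):
    """Least-squares fit of a linear stand-in base learner
    (the paper uses MGU/GRU/LSTM networks here instead)."""
    w, *_ = np.linalg.lstsq(X, residual, rcond=None)
    return w

def egb_fit(X, y, n_stages=5, candidate_rates=(0.05, 0.1, 0.2, 0.5, 1.0)):
    """Gradient boosting for squared loss: each stage fits the current
    residuals, then picks the shrinkage rate that most reduces training
    error before the new learner is integrated."""
    pred = np.zeros(len(y))
    stages = []
    for _ in range(n_stages):
        residual = y - pred            # negative gradient of squared loss
        w = fit_base_learner(X, residual)
        out = X @ w
        # learning-rate search before integrating the new learner
        rate = min(candidate_rates,
                   key=lambda r: np.mean((y - (pred + r * out)) ** 2))
        pred = pred + rate * out
        stages.append((rate, w))
    return stages

def egb_predict(stages, X):
    """Sum the shrunken outputs of all integrated base learners."""
    pred = np.zeros(X.shape[0])
    for rate, w in stages:
        pred += rate * (X @ w)
    return pred

# Toy time series: predict x[t] from the previous 3 values.
rng = np.random.default_rng(0)
series = np.sin(np.arange(200) * 0.1) + 0.05 * rng.standard_normal(200)
X = np.stack([series[i:i + 3] for i in range(len(series) - 3)])
y = series[3:]
stages = egb_fit(X, y)
mse = np.mean((egb_predict(stages, X) - y) ** 2)
```

The per-stage rate search is what distinguishes this scheme from fixed-shrinkage boosting: a rate chosen too large would overshoot the residual fit of a flexible base learner, while a fixed small rate would need many more stages.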
id doaj-art-54d93d38f41748d6a27288cfae5d9eb8
institution Matheson Library
issn 2169-3536
spelling IEEE Access, vol. 13, pp. 122331-122340, published 2025-01-01. DOI: 10.1109/ACCESS.2021.3082519 (IEEE Xplore article 9438681). Authors: Shiqing Sang (ORCID 0000-0002-7267-8326), Jiaxing Vocational and Technical College, Jiaxing, China; Fangfang Qu (ORCID 0000-0002-5748-8372) and Pengcheng Nie (ORCID 0000-0002-2298-1474), College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China.
topic Gradient boosting
LSTM
GRU
MGU
ensemble learning
time series data prediction