A hybrid CNN-BiLSTM deep learning framework for signal detection of a massive MIMO-NOMA system

Bibliographic Details
Main Authors: Mohamed A. Abdelhamed, Mennatalla Samy, Bassem E. Elnaghi, Ahmed Magdy
Format: Article
Language: English
Published: Elsevier 2025-09-01
Series: Results in Engineering
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S2590123025019231
Description
Summary: Non-orthogonal multiple access (NOMA) has been proposed as a replacement for orthogonal multiple access (OMA) in 6G networks to reduce latency, improve throughput, and increase data rates. However, the most common technique for detecting NOMA signals at the receiver, successive interference cancellation (SIC), has limited error-detection performance. Deep learning (DL) signal-detection methods address this limitation. In the proposed hybrid model, a convolutional neural network (CNN) and a bidirectional recurrent neural network (RNN) are combined to improve error optimization. The CNN captures the input-signal features of the massive multiple-input multiple-output (MIMO)-NOMA system; these extracted features are then fed into a bidirectional long short-term memory (BiLSTM) network, which estimates the received signal as a time series (a minimal code sketch of this pipeline is given below the record). The CNN-BiLSTM model is trained online on simulated channel data, and both Nadam and Adam optimizers are used to improve loss minimization during training. The proposed learning strategy outperforms traditional signal-detection methods, including standalone DL-based techniques (BiLSTM and CNN) and SIC-based maximum likelihood detection (MLD). Simulation results demonstrate that the CNN-BiLSTM model achieves a 60% reduction in bit error rate (BER) for the far user (FU) and 55% for the near user (NU) when high-priority (HP) bits are transmitted. Compared to traditional SIC-based MLD, the BER reduction for low-priority (LP) bits is 61% for the NU and 56% for the FU. Furthermore, for HP bits the proposed model offers BER reductions of 50% for the NU and 44% for the FU over CNN alone, and 35% for the NU and 30% for the FU over BiLSTM alone. For LP bits, the improvements are 48% for the NU and 44% for the FU over CNN, and 36% for the NU and 30% for the FU over BiLSTM. The approach also remains robust under varying training parameters such as learning rates and minibatch sizes.
ISSN: 2590-1230
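
To make the detector described in the abstract concrete, the following is a minimal, illustrative Keras sketch of a CNN front end feeding a BiLSTM sequence model, compiled with the Nadam optimizer mentioned in the abstract. All layer sizes, frame dimensions, constellation size, and the placeholder training data are assumptions chosen for illustration; they are not taken from the paper and do not reproduce the authors' configuration.

# Illustrative sketch only: a generic CNN-BiLSTM detector in Keras.
# Hyperparameters and data shapes below are assumed, not from the paper.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SYMBOLS_PER_FRAME = 64     # assumed length of one received-signal frame
FEATURES_PER_SYMBOL = 4    # e.g. real/imaginary parts of superposed NOMA signals (assumed)
NUM_CLASSES = 4            # assumed constellation size per detected symbol

def build_cnn_bilstm():
    """CNN layers extract local features of the received MIMO-NOMA signal;
    a BiLSTM then models the symbol sequence in both time directions and
    a per-symbol softmax produces the detection decision."""
    inputs = layers.Input(shape=(SYMBOLS_PER_FRAME, FEATURES_PER_SYMBOL))
    x = layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(inputs)
    x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(x)
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)  # per-symbol decision
    return models.Model(inputs, outputs)

model = build_cnn_bilstm()
# The abstract mentions both Nadam and Adam; Nadam is shown here for illustration.
model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random placeholder data standing in for simulated channel realizations.
x_train = np.random.randn(1000, SYMBOLS_PER_FRAME, FEATURES_PER_SYMBOL).astype("float32")
y_train = np.random.randint(0, NUM_CLASSES, size=(1000, SYMBOLS_PER_FRAME))
model.fit(x_train, y_train, batch_size=64, epochs=2, verbose=0)

In an actual MIMO-NOMA experiment the placeholder arrays would be replaced by received-signal frames generated from a simulated channel, with labels given by the transmitted symbols of the near and far users.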