Distributed Compensation With Parallelizability for Polarization Mode Dispersion Based on Learned Digital Backpropagation

Bibliographic Details
Main Authors: Daobin Wang, Guangfu Li, Hui Yang, Wei Li, Ruiyang Xia, Chengqi Duan, Jianming Shang, Zanshan Zhao, Guanjun Gao
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Photonics Journal
Online Access: https://ieeexplore.ieee.org/document/11060826/
Description
Summary: This study proposes a learned digital backpropagation (LDBP) algorithm that performs distributed compensation of polarization mode dispersion (PMD) in parallel. The proposed LDBP algorithm uses the regular perturbation theory of the fiber-optic nonlinear Schrödinger equation to construct a deep neural network (DNN) that is fully parallelizable. The algorithm's nonlinear compensation (NLC) performance is evaluated in numerical experiments on a 1,000 km standard single-mode fiber link carrying a dense wavelength division multiplexing (DWDM) system with five wavelength channels, 64-QAM modulation, and a symbol rate of 32 GBaud per channel. The results demonstrate that, even with low-complexity network training at identical optical power, the proposed method provides a performance improvement of approximately 0.4 dB over lumped compensation methods. This indicates that the parallelization that makes NLC execution efficient does not erode the method's advantage over lumped compensation. Finally, the computational-efficiency gain from parallelization is quantified: parallel execution is approximately 67 times faster than serial execution. These findings offer a feasible route to implementing NLC with significantly improved hardware efficiency in practice.
ISSN: 1943-0655
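
As a reading aid, the sketch below illustrates the parallel structure the abstract describes: under a first-order regular-perturbation (RP1) expansion of the nonlinear Schrödinger equation, each backpropagation step contributes an additive nonlinear term computed from the linearly propagated field alone, so all steps can be evaluated independently. This is a minimal single-polarization toy under assumed sign conventions, not the authors' LDBP network; the names (rp1_dbp, cd_operator) and parameter choices are hypothetical.

```python
# Sketch: first-order regular-perturbation digital backpropagation.
# NOT the paper's LDBP implementation; an illustration of why the
# per-step computations are independent and hence parallelizable.
import numpy as np

def cd_operator(n, dt, beta2, z):
    """Frequency response of chromatic dispersion over distance z,
    exp(j*(beta2/2)*w^2*z) under one common sign convention."""
    w = 2 * np.pi * np.fft.fftfreq(n, d=dt)
    return np.exp(0.5j * beta2 * w ** 2 * z)

def rp1_dbp(rx, dt, beta2, gamma, length, n_steps):
    """RP1 digital backpropagation, single polarization.
    beta2 [s^2/m] and gamma [1/(W*m)] are assumed already
    sign-inverted for backward propagation."""
    n = rx.size
    dz = length / n_steps
    Rx = np.fft.fft(rx)
    # Linear branch: dispersion compensation over the full length.
    a_lin = np.fft.ifft(Rx * cd_operator(n, dt, beta2, length))
    # Perturbation branches: each depends only on the linearly
    # propagated field, so this loop is embarrassingly parallel.
    dA = np.zeros(n, dtype=complex)
    for k in range(n_steps):
        z_k = (k + 0.5) * dz                                 # step midpoint
        a_k = np.fft.ifft(Rx * cd_operator(n, dt, beta2, z_k))
        kick = 1j * gamma * dz * np.abs(a_k) ** 2 * a_k      # Kerr term
        # Carry the perturbation over the remaining distance and sum.
        dA += np.fft.ifft(np.fft.fft(kick)
                          * cd_operator(n, dt, beta2, length - z_k))
    return a_lin + dA
```

In a learned variant, the per-step dispersion filters and nonlinear scaling would become trainable DNN parameters, and the paper's dual-polarization, PMD-aware form would presumably add per-step polarization rotations. The point of the sketch is only that the loop body has no step-to-step dependency, which is the structural property behind the roughly 67-fold speedup the abstract reports for parallel over serial execution.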