PhD defence by Li Zhang
Machine learning techniques for the next generation of optical communication
Abstract
The conjoining of photonics and machine learning has become a topic of great interest, especially for addressing newly arising problems that lie beyond the capabilities of conventional methods in next-generation optical communications. This doctoral thesis investigates how machine-learning algorithms can be seamlessly incorporated into optical systems to add new functionality and to enhance performance in optical amplifier design and optical performance monitoring.
Recent progress in fiber-optic communications has been driven both by the ongoing explosion in data traffic and by the availability of multiband optical amplification techniques. In particular, light propagation in wideband fibers involves more complex stimulated Raman scattering (SRS)-induced multi-channel interactions and more pronounced wavelength-dependent characteristics. Controlling these physical phenomena in optical fibers opens opportunities to exploit nonlinear SRS effects in optical amplifier design. Here, we demonstrate an accurate and efficient physics-informed neural network (PINN)-based solver of the coupled Raman differential equations for predicting power evolution and shaping gain spectra in Raman fibers.
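For reference, the coupled Raman power-evolution equations mentioned above are commonly written in the following textbook form (a standard statement from the literature, not copied from the thesis), where \(P_i\) and \(\nu_i\) are the power and frequency of channel \(i\), \(\alpha_i\) its attenuation, and \(g_{ij}\) the effective Raman gain coefficient between channels \(i\) and \(j\):

```latex
\frac{dP_i}{dz} = -\alpha_i P_i
  + \sum_{j:\,\nu_j > \nu_i} g_{ij} P_j P_i
  - \sum_{j:\,\nu_j < \nu_i} \frac{\nu_i}{\nu_j}\, g_{ij} P_j P_i
```

Higher-frequency channels amplify lower-frequency ones, and the factor \(\nu_i/\nu_j\) in the depletion term accounts for photon-number conservation (the quantum defect).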
The training of the PINN-based solver is governed by the underlying physics and supervised solely by a loss function constructed from the scarce measured physical parameters of the fiber. The learned solver is particularly well suited to modeling nonlinear SRS-induced multi-channel power transfer among all pump and signal channels in practical fibers, and it enables an ultra-fast gain response on the order of milliseconds. It also offers a new tool for evaluating the performance of practical fiber amplifiers before they are fabricated.
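To make the ingredients concrete, the following is a minimal sketch of such a physics-informed solver, reduced to a hypothetical two-channel toy problem (one pump, one signal, co-propagating) with illustrative fiber parameters; the network, parameter values, and variable names are assumptions for illustration, not the multi-channel solver developed in the thesis. A small network maps position z to log-powers, and the loss combines the residual of the coupled Raman equations with the launch-power boundary conditions at z = 0.

```python
# Minimal PINN sketch for two coupled Raman power equations (PyTorch).
# All parameter values below are illustrative assumptions.
import torch

torch.manual_seed(0)

alpha_s, alpha_p = 0.046, 0.058   # attenuation [1/km] at signal / pump
gR = 0.39                         # Raman gain efficiency [1/(W km)]
nu_ratio = 1550.0 / 1455.0        # nu_pump / nu_signal = lambda_s / lambda_p
P_s0, P_p0 = 1e-3, 0.5            # launch powers [W]
L = 80.0                          # fiber length [km]

# Network maps z -> (log P_signal, log P_pump); log-parameterization
# keeps the predicted powers strictly positive.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 2),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def loss_fn():
    # Random collocation points along the fiber
    z = torch.rand(256, 1) * L
    z.requires_grad_(True)
    logP = net(z)
    P_s, P_p = logP[:, 0:1].exp(), logP[:, 1:2].exp()
    dPs = torch.autograd.grad(P_s.sum(), z, create_graph=True)[0]
    dPp = torch.autograd.grad(P_p.sum(), z, create_graph=True)[0]
    # Physics residuals: signal is amplified, pump is depleted
    f_s = dPs - (gR * P_p * P_s - alpha_s * P_s)
    f_p = dPp - (-nu_ratio * gR * P_s * P_p - alpha_p * P_p)
    # Boundary conditions at z = 0 (both waves launched forward)
    z0 = torch.zeros(1, 1)
    bc = ((net(z0)[:, 0] - torch.log(torch.tensor(P_s0))) ** 2
          + (net(z0)[:, 1] - torch.log(torch.tensor(P_p0))) ** 2)
    return (f_s ** 2).mean() + (f_p ** 2).mean() + bc.mean()

for step in range(5000):
    opt.zero_grad()
    loss = loss_fn()
    loss.backward()
    opt.step()
```

Note that no simulated or measured power profiles are used as training labels: as in the text above, supervision comes entirely from the physics residual plus a handful of measured fiber parameters (attenuation, Raman gain efficiency, launch powers).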
A central research area in free-space optical communications is the monitoring and mitigation of atmospheric turbulence-induced distortion of optical links. Optical channels severely distorted by turbulence exhibit strong spatio-temporal dynamics. In our work, a Gaussian process regression (GPR) predictor is trained on a sliding window to learn the temporal correlations of the turbulent channels, and is then applied to predict the channel states over future time steps in satellite-to-ground optical links. We show that a simplified GPR-based predictor, learned from historical observations, captures and predicts the dynamics and fine details of the time-variant turbulent channels better than stacked-layer neural networks.
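The sliding-window scheme can be sketched as follows (scikit-learn, synthetic data; the window length, kernel choice, and toy signal are illustrative assumptions, not the configuration used in the thesis): the past W samples of the channel state form the GPR input, the next sample is the target, and multi-step forecasts are obtained by feeding predictions back into the window.

```python
# Sliding-window GPR prediction of a turbulent-channel time series.
# The synthetic "channel" below is a stand-in for measured link states.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
channel = np.sin(2 * np.pi * 0.4 * t) + 0.1 * rng.standard_normal(t.size)

W = 20  # sliding-window length (number of past samples used as features)

def windows(series, w):
    """Stack the past w samples as features, the next sample as target."""
    X = np.stack([series[i:i + w] for i in range(series.size - w)])
    y = series[w:]
    return X, y

X, y = windows(channel[:400], W)          # historical observations only
gpr = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01),
    normalize_y=True,
)
gpr.fit(X, y)

# Recursive multi-step prediction over future time steps
window = channel[400 - W:400].copy()
preds, stds = [], []
for _ in range(50):
    mu, sd = gpr.predict(window[None, :], return_std=True)
    preds.append(mu[0]); stds.append(sd[0])
    window = np.roll(window, -1); window[-1] = mu[0]
```

A practical advantage of the GP formulation is that each forecast comes with a predictive standard deviation, giving an uncertainty estimate per time step that plain stacked neural networks do not provide out of the box.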
Supervisors
- Main Supervisor: Professor Darko Zibar, DTU Electro, Denmark.
- Co-supervisor: Assistant Professor Francesco Da Ros, DTU Electro, Denmark.
- Industrial supervisors: Erwan Pincemin, Orange Labs, Lannion, France; Emilio Riccardi and Marco Quagliotti, Telecom Italia Mobile, Torino, Italy.
Assessment committee
- Associate Professor Michael Galili, DTU Electro, Denmark. (Chair)
- Associate Professor Jaroslaw Piotr Turkiewicz, Warsaw University of Technology, Poland.
- Associate Professor Vicent Choqueuse, École Nationale d'Ingénieurs de Brest (ENIB) and Laboratoire des Sciences et Techniques de l'Information, de la Communication et de la Connaissance, France.
Master of Ceremonies
- Associate Professor Haiyan Ou, DTU Electro, Denmark.
Contact
Darko Zibar, Professor, Group Leader
dazi@dtu.dk