
Details

Author(s) / Contributors
Title
All-Optical WDM Recurrent Neural Networks With Gating
Is Part Of
  • IEEE journal of selected topics in quantum electronics, 2020-09, Vol.26 (5), p.1-7
Place / Publisher
New York: IEEE
Year of Publication
2020
Link to Full Text
Source
IEEE Xplore Digital Library
Descriptions / Notes
  • Neuromorphic photonics has come to the fore, promising neural networks (NNs) with computational speeds orders of magnitude higher than their electronic counterparts. In this direction, research efforts have mainly concentrated on the development of spiking, convolutional, and Feed-Forward (FF) NN architectures aimed at solving complex cognitive problems. However, to solve complex time-series classification and prediction tasks, state-of-the-art deep-learning models in most cases require Recurrent NNs (RNNs) along with their gated variants, such as Long Short-Term Memories (LSTMs) and Gated Recurrent Units (GRUs). Herein, we experimentally demonstrate the first, to the best of our knowledge, all-optical RNN with a gating mechanism, laying the foundations for all-optical LSTMs and GRUs. The proposed layouts exploit a Semiconductor Optical Amplifier (SOA)-based sigmoid activation within a fiber loop and were validated using asynchronous Wavelength-Division-Multiplexed (WDM) signals with 100 ps optical pulses. An SOA Mach-Zehnder Interferometer (SOA-MZI) gate was employed in the gated-RNN version, with the RNN output defining the fraction of the input signal that is allowed to enter the RNN. Finally, a complex NN architecture exploiting the proposed non-gated and gated RNNs was trained on the FI-2010 financial dataset, showcasing F1 scores of 41.68% and 41.85%, respectively, and outperforming Multi-Layer Perceptron (MLP)-based models by 6.49% on average.
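  • Note on the gating mechanism: to make the update rule described in the abstract concrete, the sketch below models it in plain NumPy. A logistic sigmoid stands in for the SOA-based activation, and a gate computed from the previous RNN output scales the incoming signal, standing in for the SOA-MZI switch that selects the input fraction entering the loop. All function names, weight matrices, and dimensions are illustrative assumptions of this sketch, not taken from the paper, which implements the network optically rather than in software.

        import numpy as np

        def sigmoid(z):
            # Stand-in for the SOA-based sigmoid nonlinearity used in the paper.
            return 1.0 / (1.0 + np.exp(-z))

        def gated_rnn_step(x_t, h_prev, W_in, W_rec, W_gate, b, b_gate):
            """One step of a minimal gated RNN (all weights hypothetical).

            The gate g_t plays the role of the SOA-MZI switch: the previous
            output h_prev decides which fraction of the input x_t is allowed
            to enter the recurrent loop.
            """
            g_t = sigmoid(W_gate @ h_prev + b_gate)                  # input fraction in (0, 1)
            h_t = sigmoid(W_in @ (g_t * x_t) + W_rec @ h_prev + b)   # recurrent sigmoid update
            return h_t

        # Toy usage: four input channels (e.g. four WDM wavelengths), hidden size four.
        rng = np.random.default_rng(0)
        dim = 4
        W_in, W_rec, W_gate = (rng.normal(scale=0.5, size=(dim, dim)) for _ in range(3))
        b, b_gate = np.zeros(dim), np.zeros(dim)

        h = np.zeros(dim)
        for x in rng.normal(size=(10, dim)):   # a short input sequence
            h = gated_rnn_step(x, h, W_in, W_rec, W_gate, b, b_gate)
        print(h)

    Fixing g_t to 1 in this sketch recovers the non-gated RNN variant also evaluated in the paper.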
