Efficient Hardware Architectures for 1D- and MD-LSTM Networks
Recurrent Neural Networks, in particular One-dimensional and Multidimensional Long Short-Term Memory (1D-LSTM and MD-LSTM), have achieved state-of-the-art classification accuracy in many applications such as machine translation, image caption generation, handwritten text recognition, medical imaging, and many more. However, high classification accuracy comes with high compute, storage, and memory bandwidth requirements, which make deployment challenging, especially on energy-constrained platforms such as portable devices. In comparison to CNNs, few investigations exist on efficient hardware implementations of 1D-LSTM, especially under energy constraints, and there is no research publication on a hardware architecture for MD-LSTM. In this article, we present two novel architectures for LSTM inference: a hardware architecture for MD-LSTM, and a DRAM-based Processing-in-Memory (DRAM-PIM) hardware architecture for 1D-LSTM. We present for the first time a hardware architecture for MD-LSTM and show a trade-off analysis between accuracy and hardware cost for various precisions. We implement the new architecture as an FPGA-based accelerator that outperforms an NVIDIA K80 GPU implementation in terms of runtime by up to 84× and energy efficiency by up to 1238× on a challenging dataset for historical document image binarization from the DIBCO 2017 contest and on the well-known MNIST dataset for handwritten digit recognition. Our accelerator demonstrates the highest accuracy and comparable throughput in comparison to state-of-the-art FPGA-based implementations of multilayer perceptrons for the MNIST dataset. Furthermore, we present a new DRAM-PIM architecture for 1D-LSTM targeting energy-efficient compute platforms such as portable devices. The DRAM-PIM architecture integrates the computation units in close proximity to the DRAM cells in order to maximize data parallelism and energy efficiency.
The proposed DRAM-PIM design is 16.19× more energy efficient compared to the FPGA implementation. The total chip area overhead of this design is 18% compared to a commodity 8 Gb DRAM chip. Our experiments show that the DRAM-PIM implementation delivers a throughput of 1309.16 GOp/s for an optical character recognition application.
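As background (not part of the record itself), the per-time-step recurrence of a standard 1D-LSTM cell is sketched below; its gate matrix-vector products are the workload that dominates the compute and memory-bandwidth cost discussed in the abstract. This is the textbook formulation, not necessarily the exact variant implemented in the article:

```latex
\begin{aligned}
i_t &= \sigma\!\left(W_i x_t + U_i h_{t-1} + b_i\right) & \text{(input gate)}\\
f_t &= \sigma\!\left(W_f x_t + U_f h_{t-1} + b_f\right) & \text{(forget gate)}\\
o_t &= \sigma\!\left(W_o x_t + U_o h_{t-1} + b_o\right) & \text{(output gate)}\\
\tilde{c}_t &= \tanh\!\left(W_c x_t + U_c h_{t-1} + b_c\right) & \text{(candidate cell state)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t\\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
```

The eight weight matrices $W_\ast, U_\ast$ must be streamed from memory at every time step, which is why placing the compute near the DRAM cells (as in the DRAM-PIM design) can pay off in energy efficiency; MD-LSTM extends this recurrence to multiple spatial dimensions with one previous hidden state per dimension.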
Author: | Vladimir Rybalkin, Chirag Sudarshan, Christian Weis, Jan Lappas, Norbert Wehn, Li Cheng |
---|---|
URN: | urn:nbn:de:hbz:386-kluedo-77776 |
DOI: | https://doi.org/10.1007/s11265-020-01554-x |
ISSN: | 1939-8115 |
Parent Title (English): | Journal of Signal Processing Systems |
Publisher: | Springer Nature - Springer |
Document Type: | Article |
Language of publication: | English |
Date of Publication (online): | 2024/03/07 |
Year of first Publication: | 2020 |
Publishing Institution: | Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau |
Date of the Publication (Server): | 2024/03/07 |
Issue: | 92 |
Page Number: | 27 |
Source: | https://link.springer.com/article/10.1007/s11265-020-01554-x |
Faculties / Organisational entities: | Kaiserslautern - Fachbereich Elektrotechnik und Informationstechnik |
DDC-Classification: | 6 Technik, Medizin, angewandte Wissenschaften / 621.3 Elektrotechnik, Elektronik |
Collections: | Open-Access-Publikationsfonds |
Licence (German): | Zweitveröffentlichung (secondary publication) |