
Neural Network Comparison

Software Implementation (Python) vs. Hardware Implementation (FPGA)
• Introduction
In our Intelligent Control course, we are introduced to neural networks
and their applications, specifically forecasting.

• Scope
• Accuracy
• Time (as epochs are increased)
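The scope items above (accuracy, and training time as epochs increase) can be illustrated with a minimal software baseline. The sketch below is purely illustrative and not taken from any of the reviewed papers: a tiny NumPy feed-forward network trained on a synthetic forecasting task, reporting wall-clock time and error for increasing epoch counts.

```python
# Illustrative software (Python) baseline for the scope items above:
# accuracy vs. training time as the number of epochs increases.
# The network, data, and hyperparameters are all hypothetical.
import time
import numpy as np

def make_data(n=2000, seed=0):
    """Synthetic one-step-ahead forecasting task: predict x[t] from x[t-3..t-1]."""
    rng = np.random.default_rng(seed)
    series = np.sin(np.linspace(0, 20 * np.pi, n)) + 0.1 * rng.standard_normal(n)
    X = np.stack([series[i:i + 3] for i in range(n - 3)])
    y = series[3:].reshape(-1, 1)
    return X, y

def train(X, y, epochs, lr=0.05, seed=0):
    """Train a 3-8-1 tanh network with full-batch gradient descent.

    Returns (wall-clock seconds, final mean squared error)."""
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((3, 8)) * 0.1
    b1 = np.zeros(8)
    W2 = rng.standard_normal((8, 1)) * 0.1
    b2 = np.zeros(1)
    start = time.perf_counter()
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)            # forward pass
        err = (h @ W2 + b2) - y             # gradient of 0.5 * MSE
        dh = (err @ W2.T) * (1.0 - h ** 2)  # backprop through tanh
        W2 -= lr * (h.T @ err) / len(X)
        b2 -= lr * err.mean(0)
        W1 -= lr * (X.T @ dh) / len(X)
        b1 -= lr * dh.mean(0)
    elapsed = time.perf_counter() - start
    h = np.tanh(X @ W1 + b1)
    mse = float(((h @ W2 + b2 - y) ** 2).mean())
    return elapsed, mse

if __name__ == "__main__":
    X, y = make_data()
    for epochs in (10, 100, 1000):
        t, mse = train(X, y, epochs)
        print(f"{epochs:5d} epochs: {t:.3f} s, MSE {mse:.5f}")
```

Running this shows the trade-off the scope names: more epochs cost proportionally more wall-clock time in a sequential software implementation, which is the motivation for the parallel FPGA designs reviewed below.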
Neural Network Implementation on FPGA
This literature overview provides a thorough review of several studies that have
successfully used FPGAs to build neural networks. By surveying the abundance of
prior research, we aim to present a comprehensive picture of the approaches,
techniques, and results that have emerged from the combination of these two
fields. Our main goal is to understand the various strategies used, the particular
architectures implemented, and the performance metrics achieved across several
kinds of real-world applications.
Results
FPGA Architecture for Feed-Forward Sequential Memory Network targeting Long-Term Time-Series Forecasting
• The proposed architecture demonstrates that resource requirements do not increase exponentially as the network scale increases, indicating its scalability and efficiency.
• The FPGA implementation of the FSMN shows superior accuracy compared to other models, such as LSTM, in long-term time-series forecasting tasks.
• The architecture achieves efficient, parallel computation, enabling real-time processing and adequate performance for forecasting tasks.
• The FPGA implementation provides a highly parallel and efficient computation platform for the FSMN, enhancing its accuracy and scalability.
• The results highlight the effectiveness of the FSMN in capturing long-term dependencies in time-series data and its potential for accurate long-term time-series forecasting.

Low-Cost Hardware Design Approach for Long Short-Term Memory (LSTM)
• The proposed hardware implementation of LSTM achieves the desired performance in classification tasks compared to the traditional method, as demonstrated through simulation experiments on the MNIST and IMDB datasets.
• The proposed method uses fewer hardware resources and consumes less power than the traditional method: it achieves a 16% reduction in area and consumes 1.546 W, versus 1.847 W for the traditional method.

An FPGA implementation of a long short-term memory neural network
• The paper proposes a hardware architecture for a Long Short-Term Memory (LSTM) neural network that exploits parallelism and outperforms software implementations.
• The synthesized network achieves a 251 times speed-up over a custom-built software network, demonstrating the benefits of parallel computation.
• The hardware implementation of the LSTM network performs significantly better than the Python model, delivering around 3.15 million predictions per second versus around 14 thousand for the Python model.
• The results highlight the relevance and effectiveness of the proposed hardware implementation, showcasing the potential of hardware acceleration for LSTM networks.
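As a sanity check on the figures quoted above, the ratios they imply can be recomputed directly. The numeric inputs below are the values as cited in the reviewed papers; the script itself is only arithmetic.

```python
# Recompute the ratios implied by the figures quoted in the results above.
fpga_throughput = 3.15e6      # predictions per second (FPGA LSTM)
python_throughput = 14e3      # predictions per second (Python model)
proposed_power_w = 1.546      # W, proposed low-cost LSTM design
traditional_power_w = 1.847   # W, traditional method

throughput_ratio = fpga_throughput / python_throughput
power_saving = 1.0 - proposed_power_w / traditional_power_w

print(f"Throughput ratio: {throughput_ratio:.0f}x")  # 225x
print(f"Power saving:     {power_saving:.1%}")       # 16.3%
```

Note that this 225-times throughput ratio is a separate figure from the 251-times speed-up, which is measured against a custom-built software network; likewise, the roughly 16% power saving is distinct from the 16% area reduction reported by the low-cost design paper.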
Discussions
• Limitations
• Suggestions
