
A Novel and Efficient Training Accelerator for Two-Means Decision Tree

Abstract: Fast execution and excellent interpretability make decision trees (DTs) popular in
machine learning (ML) applications. Because DT training is time-consuming, this brief proposes
a hardware training accelerator. The proposed accelerator is implemented and tested on a
field-programmable gate array (FPGA) with a maximum operating frequency of 62 MHz. To reduce
training time and maximize resource efficiency, the architecture exploits both parallel and
pipelined execution; the resulting hardware implementation is at least 14 times faster than
C-based software implementations. In addition, the architecture can be retrained on the next
batch of data with a single RESET signal. This portable training capability allows the hardware
to be deployed in a variety of settings, and the design can be implemented with Xilinx tools.
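
The split rule implied by the title can be summarized in software. The sketch below is a
minimal, illustrative model of a two-means split for a single tree node: it assumes each node's
threshold is obtained by running one-dimensional k-means with k = 2 over a feature and splitting
at the midpoint of the two cluster means. The function name two_means_threshold, the
initialization, and the fixed iteration budget are assumptions for illustration, not details
taken from the brief.

#include <stdio.h>

/* Minimal 1-D two-means split: run Lloyd's updates with k = 2 and
 * return the midpoint of the two cluster means, which serves as the
 * decision-tree node threshold. (Illustrative sketch, not the RTL.) */
static float two_means_threshold(const float *x, int n)
{
    float m0 = x[0], m1 = x[n - 1];   /* crude initial means        */
    for (int it = 0; it < 16; it++) { /* fixed iteration budget     */
        float s0 = 0.0f, s1 = 0.0f;
        int   c0 = 0,    c1 = 0;
        for (int i = 0; i < n; i++) {
            /* assign each sample to the nearer mean */
            float d0 = x[i] - m0, d1 = x[i] - m1;
            if (d0 * d0 <= d1 * d1) { s0 += x[i]; c0++; }
            else                    { s1 += x[i]; c1++; }
        }
        if (c0) m0 = s0 / c0;         /* recompute the two means    */
        if (c1) m1 = s1 / c1;
    }
    return 0.5f * (m0 + m1);          /* split at the midpoint      */
}

int main(void)
{
    float x[] = { 1.0f, 1.2f, 0.9f, 5.1f, 4.8f, 5.3f };
    printf("threshold = %.3f\n",
           two_means_threshold(x, (int)(sizeof x / sizeof x[0])));
    return 0;
}

The inner assign-and-accumulate loop is the kind of per-sample work that the parallel,
pipelined datapath described above would be expected to speed up.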

Existing System: The existing hardware architecture employs conventional multipliers, such as
pipelined multipliers.

Proposed System: A Vedic multiplier is used in place of the conventional multiplier; a
behavioral sketch of the idea is given below.
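
For concreteness, here is a minimal behavioral model in C of the Urdhva Tiryagbhyam
("vertically and crosswise") scheme on which Vedic multipliers are commonly built: a 2x2 cell
composed into a 4x4 multiplier. The bit widths, structure, and function names are illustrative
assumptions, not the accelerator's actual RTL.

#include <assert.h>
#include <stdio.h>

/* 2x2 Urdhva Tiryagbhyam cell: "vertical and crosswise" products. */
static unsigned vedic2x2(unsigned a, unsigned b)
{
    unsigned a0 = a & 1u, a1 = (a >> 1) & 1u;
    unsigned b0 = b & 1u, b1 = (b >> 1) & 1u;

    unsigned s0 = a0 & b0;               /* vertical: LSB pair        */
    unsigned c  = (a1 & b0) + (a0 & b1); /* crosswise: middle column  */
    unsigned s1 = c & 1u;
    unsigned s2 = (a1 & b1) + (c >> 1);  /* vertical MSB pair + carry */
    return (s2 << 2) | (s1 << 1) | s0;
}

/* 4x4 multiplier composed from four 2x2 cells plus adders, mirroring
 * the hierarchical way Vedic multipliers are usually built in RTL.  */
static unsigned vedic4x4(unsigned a, unsigned b)
{
    unsigned aL = a & 3u, aH = (a >> 2) & 3u;
    unsigned bL = b & 3u, bH = (b >> 2) & 3u;

    unsigned pLL = vedic2x2(aL, bL);
    unsigned pHL = vedic2x2(aH, bL);
    unsigned pLH = vedic2x2(aL, bH);
    unsigned pHH = vedic2x2(aH, bH);

    /* shift-and-add stage (adder tree in hardware) */
    return pLL + ((pHL + pLH) << 2) + (pHH << 4);
}

int main(void)
{
    /* exhaustive check against the reference product */
    for (unsigned a = 0; a < 16; a++)
        for (unsigned b = 0; b < 16; b++)
            assert(vedic4x4(a, b) == a * b);
    printf("vedic4x4 matches a*b for all 4-bit operands\n");
    return 0;
}

The same composition pattern extends to 8x8 and wider operands, which is where the area and
power advantages listed below are usually claimed.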

Advantages:

- Less area
- Low power consumption

Applications:

- Accelerators
- Image processing
- Signal processing
- Satellite communications
- Mobile communications
