ECE 793 SEMINAR
University of Massachusetts, Amherst
Department of Electrical & Computer Engineering
Where: Marston 132
When: Friday, Oct. 06, 2006, at 4:00 pm
Back-propagation, a well-known technique for training artificial neural networks, is
computation-intensive and offers a high degree of exploitable parallelism. We
present an implementation of a one-dimensional linear systolic array for the
entire learning phase of the back-propagation algorithm, after which the
architecture passes directly into the application phase. For a neural network
with P input neurons, Q hidden-layer neurons, and R output neurons, this new
architecture uses Q processors and achieves a running time of
(2P + R + max(Q, R)) per training instance. This running time is an
improvement over the most efficient hardware solution reported in the
literature to date.
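To make the learning phase concrete, the following is a minimal software sketch of one back-propagation training step for a P-Q-R network (P inputs, Q hidden neurons, R outputs). The layer sizes, sigmoid activation, learning rate, and XOR training loop are illustrative assumptions for exposition, not the systolic-array design presented in the talk:

```python
import numpy as np

# Illustrative P-Q-R network; P=2, R=1 matches the exclusive-or problem,
# Q=3 is an assumed hidden-layer size (not taken from the talk).
P, Q, R = 2, 3, 1
rng = np.random.default_rng(0)
W1 = rng.standard_normal((Q, P)) * 0.5   # input -> hidden weights
W2 = rng.standard_normal((R, Q)) * 0.5   # hidden -> output weights
eta = 0.5                                # assumed learning rate

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, t):
    """One forward + backward pass for a single training instance."""
    global W1, W2
    h = sigmoid(W1 @ x)                       # forward: hidden activations
    y = sigmoid(W2 @ h)                       # forward: output activations
    delta_out = (t - y) * y * (1.0 - y)       # output-layer error term
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)  # error propagated back
    W2 += eta * np.outer(delta_out, h)        # weight updates
    W1 += eta * np.outer(delta_hid, x)
    return float(np.sum((t - y) ** 2))        # squared error for this instance

# Train on the XOR patterns; per-epoch total error is recorded in `history`.
data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
history = []
for epoch in range(3000):
    err = sum(train_step(np.array(x, float), np.array(t, float))
              for x, t in data)
    history.append(err)
```

The hardware architecture pipelines exactly this forward/backward dataflow across Q processors, one per hidden neuron, rather than iterating it sequentially as above.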
We describe the back-propagation algorithm in a very high-level language (ALPHA) and perform design exploration with MMALPHA, an automatic systolic-array synthesis tool, to generate the RTL description of this architecture. We use a Field Programmable Gate Array (FPGA) coprocessor as the hardware implementation medium.

We report the performance of the proposed architecture on two real applications: (i) the exclusive-or problem and (ii) the cart-pole balancing problem. Implemented on a Xilinx Virtex-II FPGA (XC2V6000), the proposed architecture achieves an operating frequency of 45 MHz (the highest reported in the literature to date) and a sustained throughput of 3.6 Giga Operations Per Second (GOPS).