EECS 349, Machine Learning

  Instructor: Prof. Doug Downey
By Majed Valad Beigi
majed.beigi@northwestern.edu
Northwestern University

Offline Handwritten Character Recognition
 

You can download the full project report and source code here:
FullReport_Source Code

Back Propagation Neural Network

In this part, an Artificial Neural Network (ANN) forms the basis of an OCR system trained with the Back Propagation algorithm. After the handwritten English characters are converted into 5*7 or 6*8 matrices as explained earlier, these matrices are fed to the ANN as input. The feed-forward pass computes the network's output; the Back Propagation algorithm then trains the network by calculating the error and modifying the weights. The BP algorithm starts at the output layer, which is the only layer where desired outputs are available. The error in the output layer is calculated as the difference between the desired output and the actual output. In this project, the result of the BP ANN is a 1*52 vector, which identifies one of the 52 (26 upper-case and 26 lower-case) letters of the English alphabet.
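As an illustration, the following Python sketch shows how such a 1*52 competitive output vector can be decoded into a character. The variable and function names are illustrative only; the project's actual implementation is in the downloadable source code.

import numpy as np

# The 52 class labels: 26 upper-case letters followed by 26 lower-case letters
LABELS = [chr(c) for c in range(ord('A'), ord('Z') + 1)] + \
         [chr(c) for c in range(ord('a'), ord('z') + 1)]

def decode_output(output):
    """Map a 1*52 network output to a character by taking the strongest unit."""
    return LABELS[int(np.argmax(output))]

# Example: a one-hot target vector for 'b' (index 27 = 26 upper-case + 1)
target = np.zeros(52)
target[27] = 1.0
assert decode_output(target) == 'b'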

(a) Structure:

The Back Propagation Neural Network implemented for this project is composed of three layers: one input, one hidden, and one output. For the 5*7 (6*8) matrices, the input layer has 35 (48) neurons, the hidden layer has 100 neurons (a number determined by trial and error), and the output layer has 52 neurons. The output layer is in fact a competitive layer: only one bit of the output becomes 1 for each class. For this project, the sigmoid function has been used as the non-linear neuron activation function:

f(x) = 1 / (1 + e^(-x))

Bias terms (equal to 1) with trainable weights were also included in the network structure. The structural diagram of the neural network is given in the following figure:

Figure 4: Schematic design of the back-propagation neural network.
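A minimal Python sketch of this structure is given below, assuming uniform random weight initialization (the report does not specify the initialization scheme) and the bias handled as an extra constant-1 input to each layer:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Layer sizes for the 5*7 input grids (use n_in = 48 for the 6*8 grids)
n_in, n_hidden, n_out = 35, 100, 52

rng = np.random.default_rng(0)
W1 = rng.uniform(-0.5, 0.5, (n_hidden, n_in + 1))   # +1 column for the bias weight
W2 = rng.uniform(-0.5, 0.5, (n_out, n_hidden + 1))

def forward(x):
    """Feed-forward pass; a constant 1 is appended to each layer's input for the bias."""
    h = sigmoid(W1 @ np.append(x, 1.0))
    y = sigmoid(W2 @ np.append(h, 1.0))
    return h, y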

(b) Network set-up:

The back-propagation (BP) learning algorithm was used to solve the problem. The goal of this algorithm is to minimize the error energy at the output layer. In this method, the training input vectors are applied one by one to the input of the network and forward-propagated to the output; the weights are then adjusted by the BP algorithm. These steps are repeated over the whole training set, and the algorithm stops when adequate convergence is reached.
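This per-pattern step might look like the following Python sketch: a standard sigmoid-BP weight update with momentum, continuing the forward() example above. The exact equations used in the project follow the course textbook, so treat this only as an indicative outline.

def bp_step(x, t, W1, W2, dW1, dW2, lr, momentum):
    """One BP update for input x and target t; dW1, dW2 start as zero arrays."""
    # Forward pass with bias inputs appended
    xb = np.append(x, 1.0)
    h = sigmoid(W1 @ xb)
    hb = np.append(h, 1.0)
    y = sigmoid(W2 @ hb)
    # Output-layer delta: error times the sigmoid derivative y * (1 - y)
    delta_out = (t - y) * y * (1.0 - y)
    # Hidden-layer delta, back-propagated through W2 (bias column dropped)
    delta_hid = (W2[:, :-1].T @ delta_out) * h * (1.0 - h)
    # Weight changes with momentum carried over from the previous step
    dW2 = lr * np.outer(delta_out, hb) + momentum * dW2
    dW1 = lr * np.outer(delta_hid, xb) + momentum * dW1
    return W1 + dW1, W2 + dW2, dW1, dW2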

o   Training algorithm:

To train the network to recognize the English alphabet characters, the corresponding 5*7 (6*8) grids are applied as 1*35 (1*48) vectors to the input of the network. The weights are then updated using the equations provided in the course textbook for the BP NN. The initial learning rate was experimentally set to 1.5; it is divided by a factor of 2 every 100 iterations and reset to its initial value every 400 iterations. The momentum rate is set to 0.95.
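One way to read that learning-rate schedule is sketched below; the exact iteration indexing (for example, whether the reset happens at iteration 400 or 401) is an assumption, since the report does not spell it out.

def learning_rate(iteration, initial=1.5):
    """Halve the rate every 100 iterations, resetting to the initial value every 400."""
    return initial / (2 ** ((iteration % 400) // 100))

# iterations 0-99 -> 1.5, 100-199 -> 0.75, 200-299 -> 0.375,
# 300-399 -> 0.1875, then back to 1.5 at iteration 400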

o   Testing algorithm:

For testing, the weights calculated during training are used. The test inputs are given as 1*35 (1*48) vectors for the corresponding 5*7 (6*8) grids. A character is considered recognized only if every output of the network is within 0.01 of its desired value.
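In code, that acceptance criterion amounts to the following check (a direct translation of the rule above; the function name is illustrative):

def is_recognized(output, target, tol=0.01):
    """A character counts as recognized only if every output unit is within tol of its target."""
    return np.all(np.abs(output - target) <= tol)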