In this project, three different Convolutional Neural Networks (CNNs) are proposed for the handwritten single-digit image recognition problem of the MNIST dataset. Two training regimes are compared in order to discern the effect of data augmentation on each model's classification power: (1) training on the pre-processed training set, and (2) training on the pre-processed and augmented training set. For the latter, an additional training set of "augmented" images with an artificially increased sample size of 84,000 images is generated. Each model is compiled with a stochastic gradient descent (SGD) optimizer and a categorical cross-entropy loss function, and is trained for 10 epochs with a batch size of 200.
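As a point of reference, the loss and optimizer named above can be sketched in plain Python; this is an illustrative sketch of the underlying formulas only (the actual training would presumably use a deep-learning framework), and the function names and learning rate here are assumptions, not the project's implementation:

```python
import math

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """Loss for one sample: -sum_k t_k * log(p_k), with one-hot y_true
    and y_pred a vector of predicted class probabilities."""
    return -sum(t * math.log(max(p, eps)) for t, p in zip(y_true, y_pred))

def sgd_step(weights, grads, lr=0.01):
    """One vanilla SGD update per parameter: w <- w - lr * dL/dw."""
    return [w - lr * g for w, g in zip(weights, grads)]
```

For example, a confident correct prediction such as `y_pred = [0.1, 0.8, 0.1]` against the one-hot label `[0, 1, 0]` yields a loss of `-log(0.8) ≈ 0.223`; as the predicted probability of the true class approaches 1, the loss approaches 0.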
The performance of each model on the test set is measured by the score reported on Kaggle's public leaderboard. By that measure, the most efficient CNN proves to be CNN Model 1, "32CL3-32CL3-MPL2-DL0.25-64CL3-64CL3-MPL2-DL0.25-FCL128-DL0.5-FCL10", achieving a 99.029% score when trained on the initial input data and 99.114% when trained on the augmented data. In addition, a weighted Majority Rule Ensemble Classifier combining the classification power of all the proposed CNN models achieves a 99.286% score on Kaggle's public leaderboard (top 11%).
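The weighted majority rule described above can be sketched as follows; this is a minimal illustration of the voting scheme only, and the function name and the example weights are assumptions (the abstract does not specify how the per-model weights were chosen):

```python
from collections import defaultdict

def weighted_majority_vote(predictions, weights):
    """Combine per-model predicted digits (0-9) with per-model weights;
    the digit accumulating the largest total weight wins."""
    scores = defaultdict(float)
    for digit, w in zip(predictions, weights):
        scores[digit] += w
    return max(scores, key=scores.get)
```

For instance, with three models predicting `[3, 3, 7]` and weights `[0.4, 0.3, 0.3]`, the ensemble outputs 3 (total weight 0.7 vs. 0.3); a sufficiently highly weighted model can also overrule the simple majority.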