Note: This project is done as part of #100MLProjects (100 Projects to Proficiency). To know more about this challenge, check out #100ML Projects Journey towards Mastery. All my source code is available on GitHub: https://github.com/laxmena/100MLProjects
Note: If you are new to Data Science and want to learn how to build a Handwritten Digit Classifier, check out the ‘How To?’ section in this article.
Project-2 Goal: To build a Machine Learning model with a GUI that lets the user hand-draw a number on the screen, and the model predicts the digit.
Context: Handwritten Digit Classification is a classification problem, where the model tries to classify a handwritten digit as one of the 10 numbers (0 to 9).
This is a supervised learning problem, and there is a widely popular dataset for it, the MNIST dataset, which comprises 70K images of handwritten digits and their labels. I have capitalized on that dataset for building this project.
Machine Learning Models:
Two Machine Learning Classification Algorithms are used in this project:
- K-Nearest Neighbors Classification
- SVM Classification
GUI for Real-Time Experience:
The MNIST dataset is extensively referred to in programming tutorials and books. They all divide the dataset into two parts. One part is used to train the Machine Learning model. The other part is fed to the trained model, which is asked to predict the digits. The model's predictions are compared with the actual labels, and the fraction that match is called the accuracy score.
With that approach, we never test the model in a real-time environment; we only evaluate it on data from the same dataset. So, this project also includes a user interface where the user can draw any number on the canvas, and the ML model will try to guess what that digit is.
Lessons:
Saving the Model as a Local file
After building the KNN model, I used the ‘pickle’ library to save the trained model to a local file, using the following command.
pickle.dump(classifier, <file>)
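For reference, here is roughly what a complete version of that call looks like; the filename ‘knn_model.pkl’ is a hypothetical example, not the one used in the project:
import pickle

# Serialize the trained classifier to a local file ('knn_model.pkl' is a hypothetical name)
with open('knn_model.pkl', 'wb') as f:
    pickle.dump(classifier, f)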
The KNN model trained on 60K 28*28 images resulted in a file of about 430MB. KNN stores the entire training set inside the model, so the serialized file is essentially the whole dataset. A file that large would be very hard to use in an application.
While searching for an alternative approach that serializes ML models into a smaller file, Thameem Abbas suggested ‘joblib’, which supports compression.
joblib.dump(classifier, <filename>, compress=('gzip', 3))
joblib saved the model to a file that weighed around 4MB, which was a significant improvement over the former.
Preprocessing the Image:
The process of building the GUI took longer than I expected. I didn't have any experience with the Tkinter package, so this was new to me. Building the canvas and positioning the buttons was really easy, but converting the drawing on the canvas into an image that can be fed to the Machine Learning model was really interesting. I referred to several resources, documentation, and other GitHub repositories to understand what needed to be done.
- Retrieve the drawing on the Canvas: the screen coordinates of the canvas are calculated and passed to the ImageGrab method as parameters to capture the drawing.
- Invert the grabbed image: the ML model is trained on images with a black background and white digits, whereas the grabbed image has a white background and black digits.
- Resize the image to 28*28: the ML model requires the input image to be 28*28, as it was trained only on 28*28 images.
- Convert to greyscale: convert the image from RGB to greyscale.
- Store the image as NumPy data for further processing.
It was challenging to figure out how to use the ImageGrab function with my limited knowledge of Tkinter.
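For anyone trying to reproduce this, here is a minimal sketch of the preprocessing steps listed above. It assumes a Tkinter root window, a canvas widget, and a trained classifier; the names and the exact ordering of steps are illustrative and may differ from the project code.
import numpy as np
from PIL import ImageGrab, ImageOps

def predict_digit(root, canvas, classifier):
    # 1. Grab the drawing: compute the canvas position on the screen
    x = root.winfo_rootx() + canvas.winfo_x()
    y = root.winfo_rooty() + canvas.winfo_y()
    box = (x, y, x + canvas.winfo_width(), y + canvas.winfo_height())
    image = ImageGrab.grab(bbox=box)
    # 2. Convert RGB to greyscale, then invert (MNIST digits are white on black)
    image = ImageOps.invert(image.convert('L'))
    # 3. Resize to the 28*28 dimensions the model was trained on
    image = image.resize((28, 28))
    # 4. Flatten to a NumPy array matching the training data shape and predict
    data = np.array(image).reshape(1, -1)
    return classifier.predict(data)[0]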
Source Code:
GitHub Source: Project2 — MNIST Digit Recognition (https://github.com/laxmena/100MLProjects/tree/master/Project2%20-%20Digit%20Recognition)
Demo:
Here is a demo of the Handwritten Digit Classification in Action.
How To?
This section is a Tutorial section for beginners, amateurs, and other enthusiasts.
I would highly recommend installing the Anaconda distribution, as it bundles most of the packages and dependencies needed for Machine Learning.
MNIST dataset can be found here: http://yann.lecun.com/exdb/mnist/
- K-Nearest Neighbors:
K-Nearest Neighbors is a Machine Learning model that tries to classify a new value by comparing it with its closest neighbors.
In the example below, the arrow points to a new data point that needs to be classified into either the white or the black group. ‘K’ in K-Nearest Neighbors refers to the number of neighbors considered. In this case, let's assume K=3.
So, the 3 nearest neighbors of the new data point are taken into consideration: 2 white points and 1 black point. Based on the majority, we classify the new data point as white. This is the KNN algorithm.
In this project, this same KNN algorithm is used to classify a new handwritten number into one of the 10 classes (digits 0 to 9).
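To make the majority-vote idea concrete, here is a tiny, self-contained sketch on made-up 2D points (purely illustrative, not the project code):
from sklearn.neighbors import KNeighborsClassifier

# Made-up 2D points belonging to two groups
points = [[1, 1], [1, 2], [2, 2], [8, 8], [9, 8]]
labels = ['white', 'white', 'white', 'black', 'black']

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(points, labels)

# The 3 nearest neighbors of (2, 1) are mostly 'white', so KNN predicts 'white'
print(knn.predict([[2, 1]]))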
One of the main reasons for the popularity of Python is its extensive and powerful libraries, which drastically reduce development time. We can capitalize on these libraries to speed up development.
Scikit-learn is a popular Machine Learning library that contains implementations of most common Machine Learning algorithms. We can create an instance of the KNN implementation from this library and start training the model right away.
from sklearn.neighbors import KNeighborsClassifier

# Create KNeighborsClassifier object
classifier = KNeighborsClassifier(n_neighbors=4)
n_neighbors sets the number of neighbors to consider, which we can tune to our accuracy and performance requirements.
Other Machine Learning models can be instantiated in a similar way. The scikit-learn documentation is very good and easy to understand; at first it may seem messy and scary, but it is one of the most well-structured documentations out there.
After creating the model comes the training part. We need to provide the Machine Learning model with data, so it can understand and recognize patterns in the data.
classifier.fit(X, y)
# X = Image Data
# y = Image Labels
The ‘fit’ method is the standard scikit-learn method for training a model.
X = Image data stored in a two-dimensional array format
y = Real Values/Labels of the image.
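As an illustration, here is one way to load MNIST and train the classifier, using scikit-learn's fetch_openml copy of the dataset; the project code may load the data differently.
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split

# Download the 70K MNIST images (each flattened to 784 pixel values) and their labels
X, y = fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False)

# Hold out 10K images for evaluation and train on the remaining 60K
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=10000, random_state=42)

classifier.fit(X_train, y_train)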
After the model is trained, the next step is to use it to predict new values. We can use data with known labels to test the model's performance, so that we can be confident when it is released into a production environment.
classifier.predict(new_imagedata_array)
‘predict’ is the method that lets us make predictions with our Machine Learning model. Like the fit method, predict also takes in a two-dimensional array of image data.
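Continuing the illustrative split from the earlier sketch, the predictions on the held-out images can be compared with the true labels to get the accuracy score:
from sklearn.metrics import accuracy_score

# Predict labels for the held-out images and measure the fraction that are correct
predictions = classifier.predict(X_test)
print(accuracy_score(y_test, predictions))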
Here is my implementation of the KNN model for handwritten digit recognition.
This ‘How To?’ section is meant to introduce and give you a basic understanding of how to build a KNN Machine Learning model. It is intentionally kept abstract to impart enough knowledge and intuition to enable you to build your own models by tweaking and experimentation. I'm a strong believer in “Learning by Doing”.
Along with KNN, I have implemented SVM classification and the GUI in this project, which you can refer to at any time. Here is the link to my GitHub repository for Project2:
I highly value your advice, comments, suggestions, and feedback. You can send them to me via email (lakshmanan.meiyappan@gmail.com) or through LinkedIn.
#100MLProjects #laxmena
#StaySafe