How to use TensorFlow – Belgium Traffic Sign Use Case

Introduction

As we know, Deep Learning is becoming part of our day-to-day lives. So what is Deep Learning? It is a subfield of AI that builds algorithms around structures inspired by the brain.

These algorithms, designed to function like the brain, are called Neural Networks or Artificial Neural Networks.

As a learner you have probably come across many articles and blogs on this topic; here we offer another approach, aimed at giving you a beginner-level idea of TensorFlow and its architecture.

TensorFlow is used to develop algorithms based on Deep Learning. Simply put, TensorFlow is used to build, train and test Deep Learning algorithms, called models. TensorFlow uses tensors to construct these neural networks.

It performs mathematical computations on the model, which can be visualized in a graphical representation called a computational graph.

The name ‘TensorFlow’ reflects this: “tensors” are the multidimensional data arrays that hold the data, and “flow” represents the flow of that data through the computations, or functions, of the graph.


TensorFlow computations are expressed as stateful dataflow graphs. A tensor is a vector, matrix or higher-dimensional array of data. TensorFlow is a framework from Google which translates Machine Learning (ML) problems into functions on tensors.

We can use TensorFlow to solve Supervised, Unsupervised or any other ML problems.

Now let’s dig into TensorFlow.

First, let’s look at how to install TensorFlow and set up the environment, so that we can create our tensors and Deep Learning models.

Installation

  1. Download Python from https://www.python.org/downloads/
    1. Note: download version 3.6 or lower, as TensorFlow does not yet support Python’s latest 3.7 release.
    2. The “pip” package manager is installed along with Python.
    3. Check the installation from a command prompt:
      1. C:\> python --version
         Python 3.6.5
      2. C:\> pip --version
         pip 10.1
  2. Install TensorFlow using pip; you can choose between the CPU and GPU builds depending on your machine (the GPU build is typically installed as tensorflow-gpu):
    1. C:\> pip install tensorflow
  3. Now we can write our TensorFlow programs and run them with the command:
    1. C:\> python filename.py
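For example, a minimal script to check that the installation works might look like the sketch below. It uses the TensorFlow 1.x API (the API assumed throughout this article), and hello_tf.py is just an illustrative file name.

    # hello_tf.py -- a tiny graph with two constant tensors and their sum
    import tensorflow as tf

    a = tf.constant(2, name="a")
    b = tf.constant(3, name="b")
    total = tf.add(a, b, name="total")

    # In the TensorFlow 1.x API, a graph is executed inside a session
    with tf.Session() as sess:
        print(sess.run(total))   # prints 5

Save it and run it with C:\> python hello_tf.py.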

About the Dataset

That’s it, now you’re all set to design your first model using TensorFlow. Here I am going to use the familiar “Belgium Traffic Signs” dataset (you can download the dataset here), which is freely available on the internet.

Traffic signs are a very common part of day-to-day life, and we encounter many symbols with different meanings.

But sometimes we come across signs that are very common and closely resemble other signs.

The Belgium Traffic Signs dataset is a large collection of traffic sign images, captured close-up so that only the signs themselves are visible.

It also contains signs with similar meanings, so we are going to define a model that identifies these similar signs and removes them from the dataset.

To do this, we build a classification model, train it, and make predictions on the test data to find the duplicates.

The dataset contains coloured sign images categorised into 6 major groups, and all the traffic sign images are divided into 62 classes (00000 – 00061).

Each class directory contains the corresponding training images and one text file with annotations, e.g. GT-00000.csv. In total there are 4591 images for training.

On average there are about 3 images per physically distinct traffic sign. The images are trained together with the annotations inside each class directory, which are then used for predicting the class of the test data.
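As a rough sketch of how the training set can be read in (assuming the archive is extracted to a Training directory with one sub-directory per class, that the images are the dataset’s .ppm files, and that scikit-image is available for reading them):

    # A sketch of loading the training set; the directory name and the
    # use of scikit-image are assumptions, not part of the original article.
    import os
    from skimage import io

    def load_data(data_dir):
        images, labels = [], []
        for class_dir in sorted(os.listdir(data_dir)):
            class_path = os.path.join(data_dir, class_dir)
            if not os.path.isdir(class_path):
                continue
            for file_name in os.listdir(class_path):
                if file_name.endswith(".ppm"):
                    images.append(io.imread(os.path.join(class_path, file_name)))
                    labels.append(int(class_dir))   # class id from the directory name
        return images, labels

    images, labels = load_data("./Training")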

Let’s check how many duplicate signs are present and how accurate our model is.

You can see below a sample of the images from the dataset, with labels displayed above the row of corresponding images.

There is also a definite disparity across classes in the training set, as seen in the histogram below.
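Such a histogram can be produced with a few lines of matplotlib, for example (a sketch reusing the labels list from the loading step above):

    # A sketch of the class-distribution histogram
    import matplotlib.pyplot as plt

    plt.hist(labels, bins=62)
    plt.xlabel("Class label")
    plt.ylabel("Number of training images")
    plt.show()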

Some classes have 200 images or fewer, while others have over 350. This means that our model could be biased towards the over-represented classes, particularly when it is unsure of its predictions.

So we need to account for this class imbalance. We also check the basic statistics of the dataset.

Now you might also want to revisit the histogram that you printed out in the initial steps of your data exploration. You can easily do this by plotting an overview of all 62 classes, with one image belonging to each class:
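A sketch of such an overview plot, assuming the images and labels lists from the loading step above:

    # Plot one sample image per class, with the image count for each label in the title
    import matplotlib.pyplot as plt

    unique_labels = sorted(set(labels))
    plt.figure(figsize=(15, 15))
    for i, label in enumerate(unique_labels):
        image = images[labels.index(label)]        # first image of this class
        plt.subplot(8, 8, i + 1)                   # an 8x8 grid fits all 62 classes
        plt.axis("off")
        plt.title("Label {} ({})".format(label, labels.count(label)), fontsize=8)
        plt.imshow(image)
    plt.show()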

In the resulting plot you can see the classes, or labels, with the total number of images classified under each category. From this we can infer that labels 22, 32, 38 and 61 clearly have larger counts.

We initially apply pre-processing steps to our images:

We identify the unique labels (signs) in the training data and generate a label for each unique class for further processing. Then we resize all the images to 32 × 32 pixels.

We convert each 3-channel colour image to a single-channel grayscale image. A sample of the grayscale training set images is shown below.
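These two steps can be sketched as follows (assuming the images and labels lists from the loading step; the channel dimension is added so each image matches the [32, 32, 1] input format used later):

    # A sketch of the pre-processing step: resize, grayscale, add channel dimension
    import numpy as np
    from skimage import color, transform

    def preprocess(images):
        resized = [transform.resize(img, (32, 32)) for img in images]   # 32 x 32 pixels
        gray = [color.rgb2gray(img) for img in resized]                 # 3 channels -> 1
        return np.array(gray)[..., np.newaxis]                          # shape (N, 32, 32, 1)

    train_images = preprocess(images)
    train_labels = np.array(labels)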

To build a model, we design a simple neural network with flattened and fully connected layers.

It uses a softmax function as its output layer, together with loss and accuracy functions. The network is trained using mini-batch stochastic gradient descent with the Adam optimizer.

The design step consists of:

  • model name,
  • input format (e.g. [32, 32, 1] for grayscale),
  • convolutional layers configuration [filter size, start depth, number of layers],
  • fully connected layers dimensions,
  • number of classes.
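As a rough illustration of these design choices, the sketch below builds such a network with the TensorFlow 1.x API, using only a flattened input and fully connected layers (convolutional layers would be configured in the same style); the hidden layer size and learning rate are illustrative assumptions, not the article’s exact configuration.

    # A minimal sketch of the network: placeholders, layers, loss, accuracy, optimizer
    import tensorflow as tf

    images_ph = tf.placeholder(tf.float32, [None, 32, 32, 1])   # grayscale input format
    labels_ph = tf.placeholder(tf.int32, [None])

    flat = tf.contrib.layers.flatten(images_ph)                  # 32x32x1 -> 1024 features
    hidden = tf.contrib.layers.fully_connected(flat, 128, activation_fn=tf.nn.relu)
    logits = tf.contrib.layers.fully_connected(hidden, 62, activation_fn=None)

    probabilities = tf.nn.softmax(logits)                        # softmax output layer
    predicted_labels = tf.argmax(logits, axis=1)

    # Loss and accuracy functions, plus the Adam optimizer for training
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels_ph))
    accuracy = tf.reduce_mean(
        tf.cast(tf.equal(tf.cast(predicted_labels, tf.int32), labels_ph), tf.float32))
    train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)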

We create a session with tf.Session() to run our model: TensorFlow acts as the backend that builds the layers and executes the network, with the model passed to the session as a computational graph.

We run the model for a number of iterations, called epochs, and then configure the model to run over the test data.
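A sketch of that training loop, reusing the graph above; train_images and train_labels are the pre-processed NumPy arrays from the earlier steps, and the number of epochs and batch size are illustrative choices:

    import numpy as np

    session = tf.Session()
    session.run(tf.global_variables_initializer())

    num_epochs, batch_size = 200, 128
    num_samples = len(train_images)

    for epoch in range(num_epochs):
        order = np.random.permutation(num_samples)          # reshuffle each epoch
        for start in range(0, num_samples, batch_size):     # mini-batch SGD
            batch = order[start:start + batch_size]
            _, loss_value = session.run(
                [train_op, loss],
                feed_dict={images_ph: train_images[batch],
                           labels_ph: train_labels[batch]})
        if epoch % 10 == 0:
            print("Epoch {}: loss = {:.4f}".format(epoch, loss_value))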

Then we evaluate the model and predict the labels for our test data. We also measure the accuracy of the predicted labels, i.e. how close each prediction is to the class defined in the annotation file.
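For example, prediction and accuracy measurement on the test set can be sketched like this, reusing the session from the training step; test_images and test_labels are assumed to have been loaded and pre-processed in exactly the same way as the training data:

    predicted = session.run(predicted_labels, feed_dict={images_ph: test_images})
    match_count = sum(int(p == t) for p, t in zip(predicted, test_labels))
    print("Accuracy: {:.3f}".format(match_count / len(test_labels)))

    session.close()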

With this, we have created a model that classifies the traffic signs and predicts their labels with good accuracy.


In the future, we can improve our model by adding regularization techniques (such as dropout), data augmentation methods, different optimizers, and modern architectures such as GoogLeNet’s Inception module, ResNet or Xception.

I hope you now have hands-on experience of designing, building and deploying a deep learning model using TensorFlow. Let’s move forward and keep playing with the dataset to improve our model and get better results. Data Science is all about improvement and accuracy.