What is Machine Learning?

  1. Everything you need to know about Machine Learning
  2. An example: training of students for an exam
  3. Difference between traditional programming and Machine Learning
  4. How ML actually works
  5. The growth of Machine Learning
  6. ML’s Importance
  7. Deciding Which ML algorithm to use
  8. Types of ML algorithms
  9. ML vs AI vs DL

The term ‘Machine Learning’ was coined by the pioneer of artificial intelligence and computer gaming – Arthur Samuel. The definition he provided for the term ‘Machine Learning’ was – “The field of study that makes computers capable of learning without being explicitly programmed.”

In layman's terms, Machine Learning (ML) is the process of automating and improving how computers learn from their own experience, without being explicitly programmed for each task. The whole process starts with feeding in quality data.

We then train our machines by building machine learning models. These models are built using the input data and various algorithms.

The algorithm we choose depends on the input data we have available at hand and also depends on the task that we are trying to automate.

Example: training of students for an exam.

Take the case of students studying for an exam. They usually don't cram the entire subject in a day but learn it over a period of time, building a complete understanding.

Before the time of examination, the students feed their machines (their brains) with a solid amount of high-quality data (being questions and answers from various books, online course material and lectures etc.).

What happens here is that the students train their brains with both the input and the output, meaning they understand what kind of logic they need to use to solve various kinds of questions.

When a student solves a question paper and compares their answers with the answer key, they gradually improve their performance with each attempt.

Performance improves, and the student gains more confidence in the approach they have adopted.

This is how ML models are actually built: the machines are trained with data (the models are fed both input and output data).

Following this, when the time comes, tests are performed where only the input is given, and the accuracy of the model is checked.

The output given by the model is then compared against the actual output, which was never fed to the model.
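
As a minimal sketch of this train-then-test workflow (the dataset, the decision-tree model and the use of scikit-learn are illustrative assumptions, not details from the article):

    # Train with both inputs and outputs, then test on inputs the model has never seen.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    X, y = load_iris(return_X_y=True)                        # input data and known outputs
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    model = DecisionTreeClassifier(random_state=0)
    model.fit(X_train, y_train)                              # training: inputs AND outputs are fed

    predictions = model.predict(X_test)                      # testing: only the inputs are given
    print("Accuracy:", accuracy_score(y_test, predictions))  # compare against the held-back outputs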

Researchers and data scientists are working tirelessly to improve algorithms and techniques to make better ML models that will perform beyond expectation.

The difference between traditional programming and Machine Learning:

  • Traditional Programming: Input (DATA) + Logic (PROGRAM) is fed in and run on the machine to produce the output.
  • Machine Learning: Input (DATA) + Output is fed in and run on the machine during training; the machine then creates its own Logic (PROGRAM), which can be evaluated through testing.
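
To make the contrast concrete, here is a small illustrative sketch (the temperature-conversion example and the use of scikit-learn are assumptions made purely for illustration): in traditional programming a human writes the logic, while in machine learning the logic is inferred from example inputs and outputs.

    # Traditional programming: the logic (Celsius -> Fahrenheit) is written by hand.
    def to_fahrenheit(celsius):
        return celsius * 9 / 5 + 32

    # Machine learning: only inputs and outputs are supplied; the machine derives its own logic.
    from sklearn.linear_model import LinearRegression

    celsius = [[0], [10], [20], [30], [40]]              # input (DATA)
    fahrenheit = [32, 50, 68, 86, 104]                   # output
    model = LinearRegression().fit(celsius, fahrenheit)  # training creates the "program"

    print(to_fahrenheit(25))       # 77.0, from the hand-written rule
    print(model.predict([[25]]))   # ~[77.], from the learned rule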

How things work in reality:

Take the case of online shopping: millions of users have millions of varied interests based on colors, brands, prices and so on. All of us, while shopping online, have a tendency to search for multiple products.

When we search for a particular product frequently, our Facebook feed, web pages and search engines start showing us recommended products, products similar to the one we searched for, or even offers related to it.

There is no one sitting behind a system coding such tasks for each and every user. All these tasks are automatic. This is where Machine Learning plays a huge role.

Data scientists, researchers and machine learning experts build models making use of a good amount of quality data and the machines perform the tasks automatically and become better at it over time with more experience.

In the world of advertising, traditional advertising was done with radio, newspapers and magazines.

Today, digital ads rule the world of advertising. Technology, especially machine learning, has enabled us to perform targeted advertising, which is a far more efficient way to reach the most receptive audience.

The healthcare industry also benefits greatly from Machine Learning. Scientists have built models that train machines to detect cancer by analysing cell-slide images alone.

For humans, performing this task would take ages. From the outside, the way machine learning detects cancer may look very simple.

You can see how machine learning is winning the battle against cancer here.

Doctors today are using machine learning to diagnose patients based on a range of relevant parameters.

A few other examples of machine learning in the real world are IMDb ratings, image recognition in Google Photos, and Google Lens, whose ML-based image-to-text recognition model can extract text from images.

Gmail, the most widely used e-mail service, also classifies e-mails into categories like Social and Promotions using text classification, which is a branch of machine learning.
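
A rough sketch of this kind of text classification (toy e-mails and a Naive Bayes classifier are used purely for illustration; Gmail's actual system is far more sophisticated):

    # Toy text classification in the spirit of e-mail categorisation (illustrative data only).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    emails = [
        "50% off on shoes this weekend only",
        "Your friend tagged you in a photo",
        "Flash sale: buy one get one free",
        "New comment on your post",
    ]
    categories = ["promotion", "social", "promotion", "social"]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(emails)               # turn text into word-count features
    classifier = MultinomialNB().fit(X, categories)    # learn which words signal which category

    print(classifier.predict(vectorizer.transform(["Huge discount on laptops"])))  # likely 'promotion'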

Why has Machine Learning been growing in the last few years?

Machine learning is growing at an accelerated rate because cheap computational power is now combined with the large pools of data needed to train our models.

Why is Machine Learning important?

In recent years, machine learning has been used to automate tasks that were previously thought to be doable only by humans.

These tasks included text generation, image recognition, playing computer games and more.

Machine Learning and AI experts thought that it would take 10 years for a machine to beat the world’s best player at the board game Go. This was in 2014. Enter Google’s DeepMind.

They proved them wrong then and there, showing the world that even in a complex board game such as Go, machines could learn the ideal move at the appropriate time.

A little further down the timeline, the OpenAI team developed a Dota bot that was capable of beating the world's best Dota team.

The advances in the field of machines playing games are massive.

The economy and our lives in general are going to be impacted by machine learning in more ways than you can imagine.

This raises the possibility of work tasks and even entire industries being automated, which will change the whole job market landscape in a big way.

This is the ideal time to learn machine learning as many companies are hiring data scientists and engineers to get into the machine learning and AI space.

How Do You Decide Which Machine Learning Algorithm to Use?

There are many unsupervised and supervised machine learning algorithms available today. Each of these has a different approach to learning.

With this being the case, picking the right algorithm for your use case may seem a little overwhelming.

When it comes to choosing the right algorithm, there is no ideal method to go about it. In reality, choosing the right algorithm is partially trial and error.

Even the most qualified data scientists can’t be sure as to whether an algorithm will or won’t work without trying it out.

However, algorithm selection depends a lot on the type of data you are working with, the insights you want from the data and what you are going to do with these insights.

Machine learning algorithms can be broadly classified into 3 types:

1. Supervised Learning

How it works: this type of algorithm has an outcome/target variable (the dependent variable) that is to be predicted from a given set of independent variables (predictors).

A function is then generated that maps the inputs to the required output using this set of variables.

The training process continues until the needed level of accuracy is achieved on the training data. Supervised learning examples: Random Forest, Decision Tree, Logistic Regression, KNN, Linear Regression, etc.
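
A minimal supervised-learning sketch (the breast-cancer dataset and the random forest are illustrative choices, not prescriptions): both the predictors and the target variable are given to the model during training.

    # Supervised learning: a labelled dataset with a target/outcome variable.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)   # predictors and target variable
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    clf = RandomForestClassifier(n_estimators=100, random_state=42)
    clf.fit(X_train, y_train)                    # learn the mapping from inputs to outputs
    print("Test accuracy:", clf.score(X_test, y_test))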

2. Unsupervised Learning

How it works: there is no target/outcome variable for this type of algorithm to predict or estimate. Its main use is clustering a population into different groups.

For example, it can segment customers into different groups for targeted interventions. Examples of unsupervised learning are K-means, the Apriori algorithm and more.
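
A minimal unsupervised-learning sketch (synthetic customer data and K-means, chosen only for illustration): no target variable is supplied, and the algorithm groups the points on its own.

    # Unsupervised learning: K-means groups unlabelled points into clusters.
    import numpy as np
    from sklearn.cluster import KMeans

    customers = np.array([[25, 20], [27, 22], [60, 80], [62, 78], [40, 50]])  # e.g. [age, spend]
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)

    print(kmeans.labels_)           # which segment each customer falls into
    print(kmeans.cluster_centers_)  # the centre of each segment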

3. Reinforcement Learning:

How it works: With the use of this algorithm, the machine is trained to make particular decisions.

It works like this: the machine continuously trains itself by trial and error in the environment it is exposed to.

In order to make accurate decisions, the machine learns from past experience and makes use of the best possible knowledge it has gathered. The Markov Decision Process is the framework typically used to describe reinforcement learning problems.
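
A very small tabular Q-learning sketch (a toy five-state corridor written from scratch, not any particular RL library), illustrating learning by trial and error:

    # The agent learns by trial and error which action moves it toward the goal state.
    import random

    n_states, n_actions = 5, 2                     # actions: 0 = left, 1 = right; state 4 is the goal
    Q = [[0.0] * n_actions for _ in range(n_states)]
    alpha, gamma, epsilon = 0.5, 0.9, 0.1          # learning rate, discount, exploration rate

    for episode in range(200):
        state = 0
        for _ in range(100):                       # cap episode length
            if state == n_states - 1:
                break
            if random.random() < epsilon:          # explore: try a random action
                action = random.randrange(n_actions)
            else:                                  # exploit: use the best knowledge so far
                best = max(Q[state])
                action = random.choice([a for a in range(n_actions) if Q[state][a] == best])
            next_state = state + 1 if action == 1 else max(0, state - 1)
            reward = 1.0 if next_state == n_states - 1 else 0.0
            # trial-and-error update: nudge the value estimate toward reward + discounted future value
            Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
            state = next_state

    # Learned policy, typically [1, 1, 1, 1] ("go right" from every non-goal state).
    print([max(range(n_actions), key=lambda a: Q[s][a]) for s in range(n_states - 1)])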

Below are a few guidelines to help you choose between supervised and unsupervised learning:

  • If you wish to train a model to make a prediction – say, the future value of a continuous variable like a stock price or temperature, or a classification like identifying cars in webcam footage – choose supervised learning.
  • If you need to explore your data and want to train a model to split the data into clusters (i.e. to find an internal representation), choose unsupervised learning.

When faced with any data problem, machine learning algorithms usually come to the rescue.

Below is a list of frequently used machine learning algorithms:

  • KNN
  • Dimensionality Reduction Algorithms
  • Linear Regression
  • Random Forest
  • Decision Tree
  • SVM
  • Naive Bayes
  • Logistic Regression
  • K-Means
  • Gradient Boosting algorithms (CatBoost, GBM, XGBoost, LightGBM)

ML vs AI vs DL

Artificial Intelligence

John McCarthy defined artificial intelligence as the science of making machines intelligent. McCarthy is recognized as one of the godfathers of artificial intelligence.

Stated below are a few definitions of artificial intelligence:

  • Simulation of intelligent behaviour in machines achieved via computer science is artificial intelligence.
  • A machine’s capability to imitate human behaviour.
  • A machine being able to perform tasks that usually require human intelligence. These tasks may include speech recognition, language translation, visual perception, decision making and more.
  • There are many ways to simulate human intelligence, and some methods are simply more sophisticated than others.

Artificial intelligence can be anything from a bunch of if-then statements to an intricate statistical model that maps sensory data to symbolic categories.

The if/then statements are just explicit rules that have been programmed by human hand.

Taken together, such if/then statements are called expert systems, knowledge graphs, rules engines or symbolic AI.

Collectively, these approaches are known as GOFAI (Good Old-Fashioned AI).

In the case of income taxes, the intelligence that such a rules-based machine mimics is that of an accountant who knows the tax code.

The accountant-like intelligence takes the information you feed in and runs it through a set of static rules.

It then tells you the amount of tax you owe. TurboTax in the US is a well-known example of such a system.

When a computer designed by experienced AI researchers succeeds at something like winning a game of chess, most people still don't consider the AI really intelligent.

This is primarily because the internals of the algorithm are well understood. Critics tend to feel that intelligence should be exclusively human and intangible; the argument goes, "True AI is whatever computers can't do yet."

Machine Learning: Programs That Alter Themselves

Machine learning can be called a subset of artificial intelligence. To put it rather simply, all machine learning counts as AI, but not all AI counts as machine learning.

To understand this with an example – symbolic logic such as expert systems, knowledge graphs and rules engines can all be categorized as artificial intelligence, however none of these can be called machine learning.

The major aspect that distinguishes machine learning from the expert systems and knowledge graphs is the ability it possesses to modify itself when it is exposed to more data.

This means that machine learning does not require human intervention to make changes; it is dynamic by itself.

This makes machine learning less reliant on human expertise and makes it less brittle in the process.

Keeping in mind as to how Arthur Samuel defined machine learning – “The field of study that makes computers capable of learning without being explicitly programmed”, we can come to the conclusion that machine learning programs are not explicitly programmed into the machines like if/then statements.

In a way, ML programs modify or adjust themselves in response to the data they are exposed to.

A simple way to understand this is to think of machine learning as a child who has just come into the world. The child will adjust its understanding of the world in response to experience.

Arthur Samuel even taught a computer program to play checkers. Arthur wanted the program to play checkers better than he did.

This was obviously not something he could program explicitly. He finally succeeded, and in 1962 the program beat the Connecticut checkers champion.

The learning in machine learning means that the ML algorithms optimize continuously along a particular dimension.

This means that they either try to minimize error or they maximize the probability of their predictions turning out to be true.

This quantity goes by three names: loss function, error function or objective function. It is called an objective function because every algorithm has a stated objective. You can get the gist of what a machine learning algorithm is for by asking what its objective function is.

The most logical question to pop into our heads next is: how do we minimize error? One way is to build a model that multiplies the inputs by a set of weights in order to make guesses about the nature of those inputs.

These guesses are the algorithm's outputs. Usually, the first guesses are way off the mark and turn out to be quite wrong.

If you have ground-truth labels relating to the input, you can measure how wrong the algorithm was by matching the output to the truth.

You can then use that error to modify your algorithm. Neural networks do exactly this: they continually measure the error and modify their parameters until the error can't be reduced any further.

In short, they are optimization algorithms. If they are tuned right, they minimize error remarkably well by guessing, over and over again.
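
As a rough illustration of this measure-the-error-then-adjust loop, here is a one-parameter gradient-descent sketch in plain Python (the data and learning rate are made up; real neural networks do the same thing across millions of parameters):

    # Measure the error against ground-truth labels, then adjust the parameter to reduce it.
    inputs = [1.0, 2.0, 3.0, 4.0]
    targets = [3.0, 6.0, 9.0, 12.0]       # ground truth: output = 3 * input

    weight = 0.0                          # initial guess
    learning_rate = 0.01

    for step in range(1000):
        # guess = weight * input; the gradient tells us which way to nudge the weight
        gradient = sum(2 * (weight * x - t) * x for x, t in zip(inputs, targets)) / len(inputs)
        weight -= learning_rate * gradient

    print(round(weight, 3))               # converges toward 3.0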

Deep Learning: More Accuracy, More Math & More Computation

What is the subset of machine learning? – Deep Learning! Whenever anyone uses the term deep learning, they usually refer to deep artificial neural networks. Sometimes, they may also be referring to deep reinforcement learning.

What are deep artificial neural networks? They are a set of algorithms that have set new records in accuracy for complex problems like – recommender systems, image recognition, natural language processing (NLP), sound recognition and more.

An example of deep learning’s brilliance – deep learning is a major part of DeepMind’s AlphaGO algorithm.

This algorithm beat Lee Sedol, the former world champion of Go, in 2016. It later went on to beat the then-top-ranked player Ke Jie in 2017. Deep learning evolves on its own, which makes it well equipped to tackle changing environments.

The term ‘deep’ in deep learning is very technical. It actually refers to the many layers in a neural network.

Generally, a shallow network will have one so-called hidden layer and a deep one will have multiple hidden layers.

The advantage of a deep network is that the multiple hidden layers allow it to learn characteristics of the data in a feature hierarchy.

This is because simple characteristics (say, two pixels) recombine from one layer to the next to form more complex characteristics (say, a line).
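
A small sketch of the shallow-versus-deep distinction (the layer sizes are arbitrary, and scikit-learn's MLPClassifier is just one convenient way to show it):

    # Shallow vs deep: the distinction is simply how many hidden layers the network has.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    shallow_net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)         # one hidden layer
    deep_net = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=500, random_state=0)     # several hidden layers

    for name, net in [("shallow", shallow_net), ("deep", deep_net)]:
        net.fit(X_train, y_train)   # the deeper net performs more computation per training pass
        print(name, "accuracy:", net.score(X_test, y_test))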

Nets with many layers pass the data (characteristics) through far more mathematical operations than nets with fewer layers, which makes them much more computationally intensive to train.

Computational intensity is the hallmark of deep learning.

This is the reason GPUs are in such high demand today: it takes that much computational power to train deep learning models.

So, you could apply the same definition to deep learning that Arthur Samuel applied to machine learning – "The field of study that makes computers capable of learning without being explicitly programmed" – with the added advantage that deep learning generally results in higher accuracy.

Deep learning also requires more training time and hardware. Another advantage is that deep learning performs remarkably well on machine-perception tasks involving unstructured data such as blobs of text and pixels.



Author: Abhimanyu Sundar
Abhimanyu is a sportsman, an avid reader with a massive interest in sports. He is passionate about digital marketing and loves discussions about Big Data.