
Recently, I completed a course on Machine Learning, and I would like to share some of the things I learnt from it. So let’s get started!

What is Machine Learning?

Machine learning, in the words of Arthur Samuel (1959), is the ‘field of study that gives computers the ability to learn without being explicitly programmed.’ One example is a spam detection system for your email. For this, the computer must ‘learn’ to distinguish between spam and non-spam emails. Another example is a system to predict housing prices from some known characteristics of a given house.

Unlike humans, machines do not have ‘experiences’. Instead, they learn from data, which in some sense plays the same role that experience plays for us. Keep in mind, though, that collecting a large amount of training data is one way, but not the only way, to make a machine learning algorithm perform better.

We can divide machine learning algorithms into two broad categories based on the objective of the algorithm and the kind of data it receives: (i) supervised learning and (ii) unsupervised learning.

Supervised Learning

Working of a Supervised Learning Algorithm

In supervised learning, the training data has some ‘features’ along with an expected output value for each training example. The objective is to correctly predict an output for a new set of features. This type of problem is so called because the ‘right answer’ (the expected output) is given along with each training example.

Suppose you want to predict the price of a car. You have a large dataset with the features of many cars, such as engine size, mileage, etc., along with their prices. (Each of these cars is a training example.) You need to build an algorithm that, given the features of a new car, predicts its price. This is one example of a supervised learning problem. More precisely, it is a regression problem, since the algorithm predicts a continuous variable, namely the price.
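
To make this concrete, here is a minimal sketch of such a regression model in Python using scikit-learn. The library choice is mine (the idea works with any tool), and the car features and prices below are invented purely for illustration.

```python
# A minimal sketch of price prediction as a regression problem (scikit-learn).
# The feature values and prices below are invented purely for illustration.
from sklearn.linear_model import LinearRegression

# Each row is one training example: [engine size (litres), mileage (km)]
X_train = [
    [1.2, 90_000],
    [1.6, 45_000],
    [2.0, 30_000],
    [3.0, 10_000],
]
y_train = [4_000, 9_500, 14_000, 32_000]  # the known prices (the 'right answers')

model = LinearRegression()
model.fit(X_train, y_train)               # learn from the labelled examples

# Predict the (continuous) price of a new, unseen car
print(model.predict([[1.8, 20_000]]))
```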

Now consider the problem of spam classification. Similar to the previous example, you have a large number of emails. These too have many features (perhaps counts of words indicative of spam or non-spam) along with the expected result (spam/non-spam). This is another supervised learning problem, but here the output takes only two discrete values, so it is called a classification problem.
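
Again purely as an illustration (this is not the method prescribed by the course), a tiny spam classifier could look like the sketch below, with a handful of made-up emails as training data.

```python
# A minimal sketch of spam classification with a bag-of-words model (scikit-learn).
# The example emails and labels are invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",
    "lowest price guaranteed, click here",
    "meeting rescheduled to monday",
    "please review the attached report",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = non-spam (the expected outputs)

vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(emails)  # word counts become the features

classifier = MultinomialNB().fit(X_train, labels)

# Classify a new email: the output is one of only two discrete values
print(classifier.predict(vectorizer.transform(["claim your free prize now"])))
```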

Unsupervised Learning

Working of an Unsupervised Learning Algorithm

In unsupervised learning, the training data has a set of features, but no ‘expected output’ for any training example. The objective of such a problem is to find ‘structure’ in the data. In other words, the algorithm tries to find relationships among the different examples in the dataset. Two examples of unsupervised learning problems are clustering and dimensionality reduction.

  1. Clustering: The objective of a clustering problem is to group the data into a given number of ‘clusters’ of examples with similar properties (features). For example, if you have a dataset of cars, you may want to group them into, say, 3 clusters based on the similarity of their features.
  2. Dimensionality Reduction: Sometimes, the data in your dataset can be represented by a smaller number of features. For example, if you have two features called ‘height in inches’ and ‘height in cm’, then one of them is redundant. (This could happen if you collected a lot of features without keeping track of them.) Dimensionality reduction converts such a dataset into an equivalent form that needs fewer features to represent it. In other words, it reduces the dimension of the dataset (hence the name). A short code sketch of both ideas follows this list.
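
Here is a rough sketch of both ideas with scikit-learn, which is just one possible tool; the tiny car dataset is invented, and notice that it has no ‘expected output’ column.

```python
# A rough sketch of clustering and dimensionality reduction (scikit-learn).
# The tiny car dataset is invented; there is no 'expected output' column.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Features: [engine size (litres), mileage in km, mileage in miles]
# The last two columns carry the same information, i.e. one is redundant.
X = np.array([
    [1.2, 90_000, 55_923],
    [1.4, 85_000, 52_817],
    [2.0, 30_000, 18_641],
    [2.2, 28_000, 17_398],
    [3.0, 10_000, 6_214],
    [3.2, 8_000, 4_971],
])

# 1. Clustering: group the cars into 3 clusters of similar examples
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # the cluster assigned to each car

# 2. Dimensionality reduction: represent the same data with 2 features instead of 3
X_reduced = PCA(n_components=2).fit_transform(X)
print(X_reduced.shape)  # (6, 2)
```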

In this post, we have seen some of the fundamentals of machine learning. My next post will explore some more aspects of machine learning – please stay tuned!
