Welcome back! In the previous post, I talked about univariate linear regression and defined the cost function as:

J(θ0, θ1) = (1/2m) · Σ (θ0 + θ1·x(i) − y(i))²    (sum over i = 1 to m)
Here m denotes the number of training examples, (x(i), y(i)) denotes the i’th training example, and θ0 and θ1 are the parameters of the model. As we discussed earlier, we need to find the values of θ0 and θ1 so that J is minimized. But how do we actually do this? One algorithm that serves the purpose is gradient descent. The idea behind gradient descent is:
- Start with some initial θ0 and θ1 (say, θ0 = θ1 = 0)
- Repeatedly change θ0 and θ1 so as to decrease J(θ0, θ1) gradually, until we reach a minimum
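To make the cost function J above concrete, here is a minimal sketch in Python (the function name and the toy data are mine, chosen purely for illustration):

```python
# A minimal sketch of the squared-error cost J(theta0, theta1) defined above.
# The toy data set is illustrative, not taken from the original post.

def cost(theta0, theta1, xs, ys):
    """Squared-error cost J for univariate linear regression."""
    m = len(xs)
    total = sum((theta0 + theta1 * x - y) ** 2 for x, y in zip(xs, ys))
    return total / (2 * m)

# Toy data that roughly follows y = 2x
xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.2]

print(cost(0.0, 0.0, xs, ys))  # poor fit, large cost (about 9.68)
print(cost(0.0, 2.0, xs, ys))  # near the true slope, much smaller cost (0.01)
```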
Mathematical form
The mathematical form of the gradient descent algorithm is:

repeat until convergence {
    θj := θj − α · ∂J(θ0, θ1)/∂θj    (simultaneously for j = 0 and j = 1)
}
Let’s walk through this slowly. The := symbol denotes an assignment (a := b means ‘variable a is assigned the value b’) rather than an equality test (a = b means ‘are the values of a and b equal?’). The term ∂J(θ0, θ1)/∂θj
is called a partial derivative. These terms (for j = 0 and j = 1) indicate the ‘slope’ of the graph in a 3D plot of the function J versus θ0 and θ1 at a particular point (such a plot might look like the one in the figure below). (More on this later.) Also, the parameter α (alpha) is called the learning rate. It controls the extent to which θ0 and θ1 change at every iteration, and is always positive.
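To make the update rule concrete, here is a minimal sketch in Python. The bowl-shaped cost J(θ0, θ1) = (θ0 − 1)² + (θ1 − 2)² is a stand-in I chose for illustration (it is not the linear-regression cost), and the partial derivatives are estimated numerically so that the code mirrors the generic form of the algorithm:

```python
# A minimal sketch of the generic gradient descent update, run on a simple
# bowl-shaped cost chosen for illustration (not the linear-regression cost).

def J(theta0, theta1):
    return (theta0 - 1.0) ** 2 + (theta1 - 2.0) ** 2   # minimum at (1, 2)

def partials(theta0, theta1, eps=1e-6):
    """Numerical estimates of the partial derivatives of J at (theta0, theta1)."""
    d0 = (J(theta0 + eps, theta1) - J(theta0 - eps, theta1)) / (2 * eps)
    d1 = (J(theta0, theta1 + eps) - J(theta0, theta1 - eps)) / (2 * eps)
    return d0, d1

alpha = 0.1                      # learning rate
theta0, theta1 = 0.0, 0.0        # initial guess (theta0 = theta1 = 0)

for _ in range(200):
    d0, d1 = partials(theta0, theta1)
    # Simultaneous update: both derivatives were computed at the old values.
    theta0, theta1 = theta0 - alpha * d0, theta1 - alpha * d1

print(theta0, theta1)            # approaches the minimum at (1, 2)
```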
Simultaneous updates
Another important thing is that in the above algorithm, θ0 and θ1 must be updated simultaneously. To understand this, consider the following two pseudocode fragments:

Fragment 1:
    temp0 := θ0 − α · ∂J(θ0, θ1)/∂θ0
    temp1 := θ1 − α · ∂J(θ0, θ1)/∂θ1
    θ0 := temp0
    θ1 := temp1

Fragment 2:
    temp0 := θ0 − α · ∂J(θ0, θ1)/∂θ0
    θ0 := temp0
    temp1 := θ1 − α · ∂J(θ0, θ1)/∂θ1
    θ1 := temp1
Fragment 1 simultaneously updates θ0 and θ1. Note that in this fragment, the original values of θ0 and θ1 are used to calculate the partial derivative terms. But fragment 2 is not equivalent to fragment 1. Why? In fragment 2, first θ0 is updated, and this updated value is used to evaluate the partial derivative in the third line. This is not what we want – the original values of θ0 and θ1 must be used to compute the partial derivatives – a simultaneous update.
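To check numerically that the two fragments really do behave differently, here is a small sketch (again in Python, with an illustrative toy data set; the cost is the squared-error cost from above, and the partial derivatives are estimated numerically):

```python
# One update of each fragment on the squared-error cost for a tiny toy data set.
# The sequential version ends up with a different theta1.

xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.2]
m = len(xs)

def J(theta0, theta1):
    return sum((theta0 + theta1 * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

def dJ(theta0, theta1, j, eps=1e-6):
    """Numerical partial derivative of J with respect to theta_j (j = 0 or 1)."""
    if j == 0:
        return (J(theta0 + eps, theta1) - J(theta0 - eps, theta1)) / (2 * eps)
    return (J(theta0, theta1 + eps) - J(theta0, theta1 - eps)) / (2 * eps)

alpha = 0.1
theta0, theta1 = 0.0, 0.0

# Fragment 1: simultaneous update (both derivatives use the original values).
temp0 = theta0 - alpha * dJ(theta0, theta1, 0)
temp1 = theta1 - alpha * dJ(theta0, theta1, 1)
print("simultaneous:", temp0, temp1)   # roughly (0.407, 0.950)

# Fragment 2: sequential update (the second derivative sees the new theta0).
t0 = theta0 - alpha * dJ(theta0, theta1, 0)
t1 = theta1 - alpha * dJ(t0, theta1, 1)
print("sequential:  ", t0, t1)         # roughly (0.407, 0.869) - theta1 differs
```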
Intuition
After reading the previous sections, one may ask, “But what is the algorithm actually doing? How can we see that it really minimizes the cost function?” Let us try to understand this. For simplicity, we can reduce the problem to one dimension, so that the cost function depends on only one parameter, θ. The algorithm now reads:

repeat until convergence {
    θ := θ − α · dJ(θ)/dθ
}
(In one variable, the partial derivative is replaced by an ordinary derivative, d/dθ.) The derivative of the cost function at a point is equal to the slope of the tangent to the curve at that point. Now assume that the plot of J(θ) vs θ looks like the one in the figure below:
First assume that the initial value of θ is at point A in the figure. At this point, the slope of the graph (and hence the derivative) is positive. So the update on θ would look like:

θ := θ − α · (some positive number)
which decreases θ (as α is always positive). If we look at the figure, this means we move closer to the minimum, which is what we want. Similarly, if we were initially at point B in the figure (negative slope), the update would then look like:

θ := θ − α · (some negative number)
which increases θ, again moving it closer to the minimum.
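The same argument can be checked numerically. In the sketch below, J(θ) = (θ − 3)² is an illustrative one-parameter cost (with its minimum at θ = 3), and the derivative is estimated numerically; the sign of the derivative alone decides whether θ moves left or right:

```python
# One-dimensional illustration: the sign of the derivative decides the
# direction of the update. J(theta) = (theta - 3)^2 is an illustrative choice.

def J(theta):
    return (theta - 3.0) ** 2          # minimum at theta = 3

def dJ(theta, eps=1e-6):
    """Numerical estimate of dJ/dtheta."""
    return (J(theta + eps) - J(theta - eps)) / (2 * eps)

alpha = 0.1

# "Point A": start to the right of the minimum, where the slope is positive.
theta = 5.0
print(dJ(theta))                       # about +4 (positive slope)
print(theta - alpha * dJ(theta))       # about 4.6: theta decreases, towards the minimum

# "Point B": start to the left of the minimum, where the slope is negative.
theta = 1.0
print(dJ(theta))                       # about -4 (negative slope)
print(theta - alpha * dJ(theta))       # about 1.4: theta increases, towards the minimum
```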
Some additional points
Two additional points worth noting are:
- In the two-variable case, the partial derivatives measure the ‘slopes’ of the 3D plot of the cost function in two perpendicular directions, so the same reasoning applies: each update moves θ0 and θ1 downhill towards a minimum of J.
- If the cost function has multiple local minima, it is not possible to predict in advance which one gradient descent will converge to; that depends on the initial values of the parameters (see the short sketch below).
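To illustrate the second point, here is a small sketch using a non-convex one-parameter cost I chose for illustration (it has local minima at θ = −1 and θ = +1); different starting points end up in different minima:

```python
# Illustrative non-convex cost with two local minima, at theta = -1 and theta = +1.
# Which minimum gradient descent reaches depends on where it starts.

def J(theta):
    return (theta ** 2 - 1.0) ** 2

def dJ(theta):
    return 4.0 * theta * (theta ** 2 - 1.0)   # derivative of J

def descend(theta, alpha=0.01, iterations=500):
    for _ in range(iterations):
        theta = theta - alpha * dJ(theta)
    return theta

print(descend( 2.0))    # converges to about +1
print(descend(-0.5))    # converges to about -1
```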
I have introduced the idea of gradient descent in this post. In the next post, I will continue talking about gradient descent and show how it can be applied to linear regression – please do stay tuned!