Activation Functions

Learn about the most popular activation functions for deep learning.


Activation functions are non-linear functions that determine the outputs of neurons. As we already discussed, each neuron accepts a set of inputs, multiplies them by the weights, sums them up, and adds a bias.

z = w_1 x_1 + w_2 x_2 + w_3 x_3 + b_0

Because this results in a linear transformation, we then pass the output through a non-linear function f so we can capture non-linear patterns in our data.

a = f(w_1 x_1 + w_2 x_2 + w_3 x_3 + b_0)
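A minimal NumPy sketch of this computation (the inputs, weights, and bias below are made-up values for illustration, and tanh stands in for an arbitrary activation f):

```python
import numpy as np

def neuron(x, w, b, f):
    """Weighted sum of inputs plus bias, passed through activation f."""
    z = np.dot(w, x) + b  # z = w_1*x_1 + w_2*x_2 + w_3*x_3 + b_0
    return f(z)

# Hypothetical inputs and parameters for a 3-input neuron
x = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, -0.25, 0.1])
b = 0.2

a = neuron(x, w, b, np.tanh)  # activation applied to the linear output
```

Without the final call to f, stacking such neurons would only ever compose linear maps, which collapse into a single linear map.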

Over the years, many functions have been proposed, each one with its strengths and weaknesses. In this lesson, we will discuss the most common ones.

Sigmoid

f(x) = \frac{1}{1+e^{-x}}
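A short sketch of the sigmoid in NumPy, showing how it squashes any real input into the interval (0, 1):

```python
import numpy as np

def sigmoid(x):
    # 1 / (1 + e^{-x}): maps large negative inputs toward 0,
    # large positive inputs toward 1, and 0 to exactly 0.5
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(np.array([-5.0, 0.0, 5.0])))
```

Because its outputs lie in (0, 1), the sigmoid is often read as a probability, which is why it commonly appears in the output layer of binary classifiers.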
