A Simple RNN Cell

As we already know, convolutional layers are specialized for processing grid-structured data such as images. In contrast, recurrent layers are designed for processing sequences.

To distinguish recurrent neural networks (RNNs) from fully connected layers, we call the non-recurrent networks feedforward neural networks.

The smallest computational unit in a recurrent network is the cell.

Recurrent cells are small neural networks designed for processing sequential data.

Recurrent models help us deal with time-varying signals, so always keep the notion of time and timesteps in the back of your mind.

A minimal recurrent cell: sequence unrolling

One can create a minimal recurrent unit by connecting the current timestep's output to the input of the next timestep.

This is the core principle of recurrence and is called sequence unrolling. Note that unrolling can happen along any dimension, but we usually refer to time.
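To make this concrete, here is a minimal sketch of a single recurrent cell in NumPy. The weight names (`W_x`, `W_h`, `b`) and dimensions are illustrative choices, not part of any library API: one step mixes the current input with the state carried over from the previous step.

```python
import numpy as np

# A minimal recurrent cell (illustrative sketch): the state produced
# at timestep t is fed back in at timestep t+1.
rng = np.random.default_rng(0)
input_dim, hidden_dim = 3, 4

W_x = rng.normal(scale=0.1, size=(hidden_dim, input_dim))   # input-to-hidden weights
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # hidden-to-hidden (recurrent) weights
b = np.zeros(hidden_dim)

def rnn_cell(x_t, h_prev):
    """One timestep: combine the current input with the previous state."""
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

h = np.zeros(hidden_dim)          # initial state: no memory yet
x_t = rng.normal(size=input_dim)  # one timestep of input
h = rnn_cell(x_t, h)              # the new state is reused at the next step
```

Note that the same small set of weights is used at every timestep; only the state changes.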

But why do we choose the time dimension?

We choose to model the time dimension with RNNs because we want to learn temporal and often long-term dependencies.

Thus, by processing the whole sequence timestep by timestep, we have an algorithm that takes into account the previous states of the sequence.
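A short NumPy sketch of this timestep-by-timestep processing (the names and sizes are illustrative assumptions): the same cell is applied at every position, and the state passed forward is what lets step t depend on steps 0 through t-1.

```python
import numpy as np

# Sequence unrolling (illustrative sketch): apply the same cell at each
# timestep, carrying the state forward so later steps "remember" earlier ones.
rng = np.random.default_rng(1)
seq_len, input_dim, hidden_dim = 5, 3, 4

W_x = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b = np.zeros(hidden_dim)

sequence = rng.normal(size=(seq_len, input_dim))  # one row per timestep
h = np.zeros(hidden_dim)                          # initial memory
states = []
for x_t in sequence:                              # unroll along the time dimension
    h = np.tanh(W_x @ x_t + W_h @ h + b)          # same weights at every step
    states.append(h)

states = np.stack(states)  # shape: (seq_len, hidden_dim)
```

The final `h` summarizes the whole sequence, which is exactly the "notion of memory" described above.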

In this manner, we have the first notion of memory (a cell)! Let’s look at everything we said in a figure:
