What is a Recurrent Neural Network?
A Recurrent Neural Network (RNN) is a type of neural network in which the output from the previous step is fed as input to the current step. In traditional neural networks, all the inputs and outputs are independent of each other.
In tasks such as predicting the next word of a sentence, the previous words are required, so the network needs to remember them. RNNs came into existence to solve this issue, and they do so with the help of a hidden state. The main and most important feature of an RNN is this hidden state, which retains information about the sequence seen so far.
RNNs have a “memory” that retains information about what has been calculated so far. An RNN uses the same parameters at every time step, since it performs the same task on each input or hidden state to produce the output. This parameter sharing reduces the number of parameters compared with other neural networks.
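The recurrence described above can be sketched in a few lines. This is a minimal illustration, not a trained model; the dimensions and random weights are assumptions chosen for readability. The key point is that one shared set of parameters is reused at every time step, and the hidden state carries memory forward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the text)
input_size, hidden_size = 4, 3

# One shared set of parameters, reused at every time step
W_xh = rng.normal(size=(hidden_size, input_size)) * 0.1
W_hh = rng.normal(size=(hidden_size, hidden_size)) * 0.1
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """One recurrent step: the hidden state carries memory forward."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# Process a sequence of 5 inputs with the SAME weights at each step
h = np.zeros(hidden_size)
for x_t in rng.normal(size=(5, input_size)):
    h = rnn_step(x_t, h)

print(h.shape)  # (3,)
```

Because `rnn_step` is the same function with the same weights at every step, the number of parameters does not grow with the length of the sequence.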
How do Recurrent Neural Networks work?
Training a typical neural network involves the following steps:
- Input an example from a dataset.
- The network will take that example and apply some complex computations to it using randomly initialised variables (called weights and biases).
- A predicted result will be produced.
- Comparing that result to the expected value will give us an error.
- Propagating the error back through the same path will adjust the variables.
- Steps 1–5 are repeated until we are confident that the variables are well trained.
- A prediction is made by applying these trained variables to new, unseen input.
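The steps above can be sketched with a toy model: a single weight learning the relationship y = 2x. The dataset, learning rate, and number of epochs are illustrative assumptions; the point is the loop of predict, compare, and adjust.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dataset standing in for "an example from a dataset": y = 2x
X = rng.normal(size=100)
y = 2.0 * X

w = rng.normal()   # randomly initialised variable (the "weight")
lr = 0.1           # learning rate (an assumption)

for epoch in range(50):
    for x_i, y_i in zip(X, y):
        pred = w * x_i        # produce a predicted result
        error = pred - y_i    # compare to the expected value
        grad = error * x_i    # propagate the error back
        w -= lr * grad        # adjust the variable
        
print(round(w, 3))  # learned weight, close to the true value 2.0
```

After training, applying `w` to a new, unseen `x` yields a prediction, which is exactly the final step in the list above.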
Recurrent neural networks work similarly but, in order to get a clear understanding of the difference, we will go through the simplest model using the task of predicting the next word in a sequence based on the previous ones.
First, we need to train the network using a large dataset. For this purpose, we can choose any large text (“War and Peace” by Leo Tolstoy is a good choice). When training is done, we can input the sentence “Napoleon was the Emperor of…” and expect a reasonable prediction based on the knowledge from the book.
So, how do we start? As explained above, we input one example at a time and produce one result, both of which are single words. The difference from a feedforward network is that we also need to be informed about the previous inputs before evaluating the result. So you can view an RNN as multiple feedforward neural networks, passing information from one to the other.
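This word-by-word flow can be sketched as follows. The tiny vocabulary and the random, untrained weights are assumptions for illustration only; a real model would be trained on a large corpus before its output distribution meant anything. What the sketch shows is how the hidden state carries the earlier words of the prompt forward to the prediction step.

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny illustrative vocabulary (an assumption; real training uses a large corpus)
vocab = ["napoleon", "was", "the", "emperor", "of", "france"]
idx = {w: i for i, w in enumerate(vocab)}

V, H = len(vocab), 8
W_xh = rng.normal(size=(H, V)) * 0.1   # input-to-hidden weights
W_hh = rng.normal(size=(H, H)) * 0.1   # hidden-to-hidden (recurrent) weights
W_hy = rng.normal(size=(V, H)) * 0.1   # hidden-to-output weights

def one_hot(i):
    v = np.zeros(V)
    v[i] = 1.0
    return v

# Feed the prompt one word at a time; the hidden state carries context forward
h = np.zeros(H)
for word in ["napoleon", "was", "the", "emperor", "of"]:
    h = np.tanh(W_xh @ one_hot(idx[word]) + W_hh @ h)

# Softmax over the vocabulary: a probability distribution over the next word
logits = W_hy @ h
probs = np.exp(logits) / np.exp(logits).sum()
print(probs.shape)  # (6,)
```

Each pass through the loop is, in effect, one of the "multiple feedforward networks" mentioned above, handing its hidden state to the next.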
Where can we use Recurrent Neural Networks?
RNNs are powerful machine learning models that have found use in a wide range of areas. They are distinctly different from convolutional models such as GoogLeNet: the main focus of RNNs is sequential data.
RNNs are widely used in the following domains and applications:
- Prediction problems
- Language Modelling and Generating Text
- Machine Translation
- Speech Recognition
- Generating Image Descriptions
- Video Tagging
- Text Summarization
- Call Center Analysis
- Face detection and OCR applications such as image recognition
- Other applications, such as music composition
What are the types of recurrent neural networks?
There are four types of recurrent neural networks:
- One to One RNN
- One to Many RNN
- Many to One RNN
- Many to Many RNN
One to One
A One to One RNN is essentially the plain, "vanilla" neural network. It is used for general machine learning problems that have a single input and a single output.
One to Many
A One to Many recurrent neural network has a single input and multiple outputs. Generating image descriptions is a typical example: one image goes in, and a sequence of words comes out.
Many to One
This type of recurrent neural network uses a sequence of inputs to generate a single output. A good example of a many to one recurrent network is sentiment analysis, in which a whole sentence is classified as expressing a positive or negative sentiment.
Many to Many
A Many to Many recurrent neural network uses a sequence of inputs to generate a sequence of outputs. Machine translation is a particularly good example of a many to many recurrent neural network.
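The many to one and many to many shapes differ only in which hidden states are kept as outputs. The sketch below runs the same recurrence over one sequence and collects outputs both ways; the dimensions and random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
H, X = 4, 3  # hidden size and input size (assumptions)
W_xh = rng.normal(size=(H, X)) * 0.1
W_hh = rng.normal(size=(H, H)) * 0.1

def run(seq):
    """Run the recurrence, returning the final state and all per-step states."""
    h, outputs = np.zeros(H), []
    for x_t in seq:
        h = np.tanh(W_xh @ x_t + W_hh @ h)
        outputs.append(h)
    return h, outputs

seq = rng.normal(size=(6, X))    # a sequence of 6 inputs
final_h, all_h = run(seq)

# Many to One: keep only the final state (e.g. one sentiment label per sentence)
print(final_h.shape)  # (4,)
# Many to Many: keep one output per step (e.g. one output token per input token)
print(len(all_h))     # 6
```

A real many to many translator would add output layers and often an encoder–decoder split, but the shape distinction is exactly the one shown here.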
What are the two issues that standard recurrent neural networks face?
1. The Vanishing Gradient Problem
Recurrent Neural Networks make it possible to model time-dependent, sequential data problems such as stock market prediction, machine translation, and text generation. However, RNNs can be rather hard to train because of gradient problems.
RNNs have to deal with the problem of vanishing gradients. The gradient carries the error signal used to update the network's parameters; when it is propagated back through many time steps it can shrink until it becomes far too small, so the parameter updates become insignificant. Learning from long data sequences becomes very difficult because of this issue.
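The shrinking can be seen directly. Backpropagation through time multiplies one Jacobian of the recurrence per time step; when the recurrent weights are small, that product decays exponentially with sequence length. The matrix size and weight scale below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Small random recurrent weights (scale 0.1 is an illustrative assumption);
# the repeated product below stands in for the chain of Jacobians in
# backpropagation through time.
W = rng.normal(size=(8, 8)) * 0.1

product = np.eye(8)
norms = {}
for t in range(1, 101):
    product = product @ W
    if t in (1, 10, 100):
        norms[t] = np.linalg.norm(product)

print(norms[1] > norms[10] > norms[100])  # True: the signal decays with depth
```

After 100 steps the gradient signal is vanishingly small, which is why updates to the earliest time steps become insignificant.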
2. The Exploding Gradient Problem
When you are training a neural network, if the gradient grows exponentially rather than decaying, you are facing an exploding gradient. The exploding gradient problem arises when large error gradients accumulate, causing very large updates to the model weights during training.
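A common remedy for exploding gradients, not described in the text above but standard practice, is gradient clipping: whenever the gradient's norm exceeds a chosen threshold, rescale the gradient to that threshold. A minimal sketch (the threshold of 5.0 is an arbitrary assumption):

```python
import numpy as np

def clip_gradient(grad, max_norm=5.0):
    """Rescale grad so its L2 norm never exceeds max_norm."""
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        return grad * (max_norm / norm)
    return grad

exploded = np.full(10, 100.0)        # a gradient that has blown up
clipped = clip_gradient(exploded)
print(float(np.linalg.norm(clipped)))  # norm capped at the threshold, 5.0
```

Clipping preserves the direction of the update while bounding its size, which keeps a single bad step from wrecking the weights.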
Some of the major consequences of gradient problems are longer training times, poor performance, and lower accuracy.
What is the difference between a CNN and an RNN?
Convolutional neural networks (CNNs) are among the most common types of neural networks used in computer vision to recognize objects and patterns in images.
CNNs have unique layers known as convolutional layers that separate them from RNNs and other types of neural networks. The usage of filters within convolutional layers is one of the defining traits of CNNs.
The biggest difference between a CNN and an RNN is the ability to process temporal information (data that comes in sequences, like a sentence). While CNNs do not have the ability to effectively interpret temporal information, RNNs are specially designed to perform that task.