Table of Contents:
Micro-Syllabus of Unit 8: Dynamically Driven Recurrent Networks (7 Hrs., 15 marks fixed)
Introduction, Recurrent Network Architectures, Universal Approximation Theorem, Controllability and Observability, Computational Power of Recurrent Networks, Learning Algorithms, Back Propagation Through Time, Real-Time Recurrent Learning, Vanishing Gradients in Recurrent Networks, Supervised Training Framework for Recurrent Networks Using Nonlinear Sequential State Estimators, Adaptivity Considerations, Case Study: Model Reference Applied to Neurocontrol
🗒️Note:→
# Introduction
- A Recurrent Neural Network (RNN) is a type of neural network in which the output from the previous step is fed as input to the current step.
- In traditional neural networks, all inputs and outputs are independent of each other. But in tasks such as predicting the next word of a sentence, the previous words are required, so the network needs to remember them.
- Thus RNNs came into existence, solving this issue with the help of a hidden layer. The main and most important feature of an RNN is its hidden state, which remembers some information about the sequence.
- An RNN has a "memory" that retains information about what has been calculated so far. It uses the same parameters for every input, since it performs the same task on all inputs (or hidden states) to produce the output.
- Thus, an RNN converts independent activations into dependent activations by giving the same weights and biases to all the layers. This reduces the complexity of a growing parameter count, and the network memorizes each previous output by feeding it as input to the next hidden step (see the right part of the figure on the next slide).
- Hence, the layers of the unrolled network on the right side can be joined into a single recurrent layer, since the weights and biases of all the hidden layers are the same. A minimal code sketch of this recurrence follows below.
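To make the weight-sharing recurrence concrete, here is a minimal sketch in NumPy, assuming the common hidden-state update h_t = tanh(W_xh x_t + W_hh h_(t-1) + b_h). The names (`W_xh`, `W_hh`, `b_h`, `hidden_size`) are illustrative, not taken from any particular library.

```python
import numpy as np

# Minimal sketch of one recurrent layer unrolled over time, assuming
# the hidden-state update h_t = tanh(W_xh @ x_t + W_hh @ h_{t-1} + b_h).
# All parameter names here are hypothetical, chosen for illustration.

rng = np.random.default_rng(0)
input_size, hidden_size, seq_len = 4, 3, 5

# One set of parameters, created once and reused at every time step --
# this is the "same weights and biases for all the layers" property.
W_xh = rng.standard_normal((hidden_size, input_size)) * 0.1   # input  -> hidden
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1  # hidden -> hidden
b_h = np.zeros(hidden_size)

x_seq = rng.standard_normal((seq_len, input_size))  # toy input sequence
h = np.zeros(hidden_size)                           # initial hidden state

for t, x_t in enumerate(x_seq):
    # The previous hidden state h is fed back in, so h_t depends on the
    # whole history x_1 ... x_t: this is the network's "memory".
    h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)
    print(f"t={t}: h={np.round(h, 3)}")
```

Because `W_xh`, `W_hh`, and `b_h` never change across the loop, the unrolled time steps behave like copies of one layer, which is exactly why the unrolled view can be folded into a single recurrent layer.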