Visualizations of Deep Learning, Recurrent Neural Networks, and Evolutionary Algorithms
Tuesday, August 23, 2011
Recurrent Neural Network State Trace Animation
Animation depicting the state of a recurrent neural network evolved to perform a balancing task (the non-Markov double pole-balancing cart task). The hidden layer of the RNN consists of two sinusoidal nodes, whose states are plotted against each other through time, with decayed traces to aid visualization.
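For anyone curious how a frame of this kind of plot can be produced, here is a rough Python/matplotlib sketch. This is not the original code behind the animation: the two hidden-state signals below are placeholder sinusoids standing in for the RNN's actual activations, and the decay is just alpha fading over the most recent states.

```python
# Minimal sketch of one frame of a decayed-trace plot.
# h1 and h2 are placeholder sinusoids, not the evolved RNN's real hidden states.
import numpy as np
import matplotlib.pyplot as plt

t = np.arange(400)
h1 = np.sin(0.05 * t)            # stand-in for hidden neuron 1 activation
h2 = np.sin(0.08 * t + 0.5)      # stand-in for hidden neuron 2 activation

trail = 80                       # number of past states kept in the trace
now = len(t) - 1                 # the "current" time step of this frame

fig, ax = plt.subplots()
for k in range(trail):
    # Older segments get lower alpha, producing the decaying trail effect.
    i = now - trail + k
    ax.plot(h1[i:i + 2], h2[i:i + 2], color="tab:blue", alpha=(k + 1) / trail)
ax.plot(h1[now], h2[now], "o", color="tab:blue")   # current state
ax.set_xlabel("hidden neuron 1 state")
ax.set_ylabel("hidden neuron 2 state")
plt.show()
```

Animating is then just a matter of redrawing this frame as `now` advances through the state trajectory.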
Here is an attempt at a similar style of visualization, but using 5 rather than 2 hidden neurons. The traces show all pairwise combinations of hidden neuron states. The combinations are: {(1,2), (1,3), (1,4), (1,5), (2,3), (2,4), (2,5), (3,4), (3,5), (4,5)}, for a total of 10 traces.
Above is a picture created by allowing a single trace to accumulate over about 6000 time steps.
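A rough sketch of how such a pairwise-trace grid can be laid out is below. Again, the five state signals are placeholder sinusoids rather than the evolved RNN's actual hidden activations; `itertools.combinations` produces the 10 pairs listed above, and each trace is accumulated over roughly 6000 time steps.

```python
# Sketch of a 10-panel pairwise-trace grid for 5 hidden neurons.
# The state signals are placeholders; substitute the recorded RNN states.
from itertools import combinations
import numpy as np
import matplotlib.pyplot as plt

t = np.arange(6000)                               # ~6000 accumulated time steps
states = [np.sin((0.01 + 0.005 * i) * t + i) for i in range(5)]

pairs = list(combinations(range(5), 2))           # the 10 pairs listed above
fig, axes = plt.subplots(2, 5, figsize=(15, 6))
for ax, (i, j) in zip(axes.flat, pairs):
    # Each panel accumulates the full trace of one hidden-state pair.
    ax.plot(states[i], states[j], linewidth=0.3, alpha=0.5)
    ax.set_title(f"({i + 1},{j + 1})")
    ax.set_xticks([])
    ax.set_yticks([])
plt.tight_layout()
plt.show()
```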
This second image depicts the dynamics of an RNN evolved on a more complex task: the 89-state maze, described here [PDF]. As can be seen, the dynamics are far more complex than those of the pole-balancing task.
This third image depicts the dynamics of an RNN evolved on the embedded Reber grammar task.