Note that I am using a sinusoidal activation function, as opposed to the standard monotonic sigmoid, which gives the RNN a much greater information storage capacity.
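To make the architecture concrete, here is a minimal sketch of a single forward step, assuming an Elman-style context layer that holds a copy of the previous hidden activations. The names, shapes, and structure below are illustrative, not my actual code. Intuitively, the extra capacity seems to come from sin() being periodic rather than monotonic, so a single unit can respond distinctly across many different input ranges.

    import numpy as np

    # Sketch of one forward step, assuming an Elman-style context layer.
    def rnn_step(x, context, W_in, W_ctx, W_out):
        # sin() in place of a monotonic sigmoid: its periodicity lets
        # one unit take distinguishable values across many input ranges.
        hidden = np.sin(W_in @ x + W_ctx @ context)
        output = np.sin(W_out @ hidden)
        return output, hidden  # the new hidden vector becomes the next context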
RANDOM SEQUENCE GENERATION
The first task I had it perform was simply to generate a prespecified sequence of binary outputs, given no input at all. To perform this task, it must rely on its contextual memory to know "where" in the sequence it is. A typical target output sequence would look something like this:
0101110101001010110011101
Interestingly enough, given only a single neuron in the hidden layer and a single neuron in the context layer, I found it is able to learn sequences up to 25 bits long perfectly.
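As a sketch of how so small a network can do this, consider the generation loop below. There is no input at all, so the network's only way of tracking its position is the single value circulating through the context loop. This is an illustration under my reading of the setup, not the evolved program itself; the unpacking of 'weights' is an assumed parameterization, and the values would come from evolution.

    import math

    # Hypothetical generation loop for the one-hidden-neuron network.
    def generate(weights, length):
        w_ctx, b_hidden, w_out, b_out = weights  # assumed genome layout
        context = 0.0
        bits = []
        for _ in range(length):
            hidden = math.sin(w_ctx * context + b_hidden)  # sinusoidal unit
            output = math.sin(w_out * hidden + b_out)
            bits.append(1 if output > 0.0 else 0)  # threshold to binary
            context = hidden  # hidden value is copied into the context neuron
        return bits

Under this (assumed) parameterization there are only four adjustable numbers, which is what makes learning a 25-step sequence so surprising.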
NON-MARKOV TIC TAC TOE
I revisited an older experiment of mine, in which the program is evolved to play tic tac toe against a hand-coded opponent of moderate skill (it cannot be beaten without being forked). The program is limited to perceiving a single cell of the 3 by 3 grid at a time. To see more, it must move from cell to cell, building an internal representation of what it has already seen. To make a move, it must move to a legal empty cell and output a 'halt' instruction (detected by the activation of a particular neuron in its output array). The fitness of the program is the percentage of games won; no credit is given for draws, which makes this a somewhat difficult task due to the discrete nature of the feedback.
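The interaction protocol can be sketched roughly as follows. The network interface (net.step), the action encoding, the step budget, and the forfeit-on-timeout rule are all assumptions made for illustration, not the actual harness:

    # Board cell encodings and a four-direction action set (assumed).
    EMPTY, X, O = 0, 1, 2
    MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

    def take_turn(net, board, row=0, col=0, max_steps=50):
        for _ in range(max_steps):
            percept = board[row][col]             # only one cell is visible
            action, halt = net.step(percept)      # context layer carries memory
            if halt and board[row][col] == EMPTY:
                board[row][col] = X               # halt on a legal empty cell
                return (row, col)
            dr, dc = MOVES[action]
            row = min(max(row + dr, 0), 2)        # stay on the 3 by 3 grid
            col = min(max(col + dc, 0), 2)
        return None                               # never halted legally

The board is never visible all at once, so the only place a partial picture of the game can live is in the context layer - hence "non-Markov".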
Again, given only a single node in the hidden layer and a single node in the context layer, I have so far observed it winning 48% of the games played. Keep in mind that even a 'perfect' hand-coded solution would not win 100% of the games: by chance, the opponent will sometimes play a series of moves that, at best, allows only for a draw.
The best behavior I had previously seen on this task was 64% wins; however, that was with many more hidden nodes and after many more runs of evolution.
The impressive thing, really, is that it is able to do anything at all with such scarce computational resources! My intuitions continue to be violated; I would not have believed this possible before actually seeing it work. And it makes me wonder: if so much can be done with just a handful of adjustable parameters, why do we need trillions of synapses in the human brain? My strong suspicion is that this complexity is not required to generate complex behavior, but rather is primarily needed for storing massive amounts of memory - for encoding the experiences accumulated during a human lifetime.