I found a very interesting podcast on iTunes called Talking Robots, which covers both Robotics and AI.
Just read that a Genetic Algorithm has been used to solve a billion-variable problem! The particular algorithm is the Compact Genetic Algorithm (cGA), which replaces the explicit population and the standard operators of crossover and mutation with a statistical model (a vector of per-variable probabilities) from which new candidate solutions are sampled. This puts it in the class of Estimation of Distribution Algorithms.
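To make that concrete, here is a minimal sketch of the cGA idea in Java, applied to a toy OneMax problem (maximise the number of 1-bits). The problem, the 50-bit size, and the virtual population size are my own illustrative choices, nothing like the billion-variable setup; the point is just that the only thing the algorithm maintains is a probability vector, one entry per variable.

```java
import java.util.Arrays;
import java.util.Random;

/** Minimal compact GA (cGA) sketch on OneMax: maximise the number of 1-bits. */
public class CompactGA {
    public static void main(String[] args) {
        int numBits = 50;          // toy problem size
        int virtualPopSize = 100;  // model update step is 1/virtualPopSize
        Random rng = new Random();

        // The "statistical model": one probability per bit, initialised to 0.5.
        double[] p = new double[numBits];
        Arrays.fill(p, 0.5);

        for (int gen = 0; gen < 100000 && !converged(p); gen++) {
            boolean[] a = sample(p, rng);
            boolean[] b = sample(p, rng);
            boolean[] winner = fitness(a) >= fitness(b) ? a : b;
            boolean[] loser  = (winner == a) ? b : a;

            // Shift the model toward the winner wherever the two samples disagree.
            for (int i = 0; i < numBits; i++) {
                if (winner[i] != loser[i]) {
                    p[i] += (winner[i] ? 1.0 : -1.0) / virtualPopSize;
                    p[i] = Math.max(0.0, Math.min(1.0, p[i]));
                }
            }
        }
        System.out.println("Final model: " + Arrays.toString(p));
    }

    static boolean[] sample(double[] p, Random rng) {
        boolean[] x = new boolean[p.length];
        for (int i = 0; i < p.length; i++) x[i] = rng.nextDouble() < p[i];
        return x;
    }

    static int fitness(boolean[] x) {          // OneMax: count the 1-bits
        int count = 0;
        for (boolean bit : x) if (bit) count++;
        return count;
    }

    static boolean converged(double[] p) {     // all probabilities near 0 or 1
        for (double pi : p) if (pi > 0.05 && pi < 0.95) return false;
        return true;
    }
}
```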
I've been experimenting with a Java-based open source data mining tool called Weka. I've been using it to try to find patterns in neural network fitness landscapes. As a first experiment, I randomly generated several thousand neural networks and tested them against Exclusive-OR. The values of their parameters and the resulting fitness were used to generate an ARFF file recognizable by Weka. I then analyzed the data using quite a few different tools, including trees, Bayesian methods, linear regression, multilayer perceptrons, and so on. My conclusion? Looking for patterns in the fitness landscape of a neural net is HARD. None of the dozen or so classifiers I tried did any better than chance (most did worse) under 10-fold cross-validation. And this is when dealing with a space of only 9 dimensions, much smaller than anything practical.
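For anyone curious, the data-generation step looks roughly like the sketch below. To be clear about the assumptions: I'm using a 2-2-1 feedforward network with biases as one way to get 9 parameters, Gaussian random weights, and fitness taken as negative squared error over the four XOR cases; the file names and exact choices are just for illustration. The output is the plain ARFF text format that Weka loads.

```java
import java.io.PrintWriter;
import java.util.Random;

/** Sample random 2-2-1 networks (9 weights including biases), score them on XOR,
 *  and dump parameters + fitness into an ARFF file for analysis in Weka. */
public class XorLandscapeToArff {
    static final double[][] XOR_IN  = {{0,0},{0,1},{1,0},{1,1}};
    static final double[]   XOR_OUT = {0, 1, 1, 0};

    public static void main(String[] args) throws Exception {
        Random rng = new Random();
        try (PrintWriter out = new PrintWriter("xor_landscape.arff")) {
            // ARFF header: relation name, one numeric attribute per weight, plus fitness.
            out.println("@relation xor_fitness_landscape");
            for (int i = 0; i < 9; i++) out.println("@attribute w" + i + " numeric");
            out.println("@attribute fitness numeric");
            out.println("@data");

            for (int n = 0; n < 5000; n++) {
                double[] w = new double[9];
                for (int i = 0; i < 9; i++) w[i] = rng.nextGaussian() * 2.0;
                StringBuilder row = new StringBuilder();
                for (double wi : w) row.append(wi).append(",");
                row.append(fitness(w));
                out.println(row);
            }
        }
    }

    /** Fitness = negative sum of squared errors over the four XOR cases. */
    static double fitness(double[] w) {
        double err = 0;
        for (int c = 0; c < 4; c++) {
            double h0 = sigmoid(w[0]*XOR_IN[c][0] + w[1]*XOR_IN[c][1] + w[2]);
            double h1 = sigmoid(w[3]*XOR_IN[c][0] + w[4]*XOR_IN[c][1] + w[5]);
            double y  = sigmoid(w[6]*h0 + w[7]*h1 + w[8]);
            err += (y - XOR_OUT[c]) * (y - XOR_OUT[c]);
        }
        return -err;
    }

    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }
}
```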
I've been reading about the Hierarchical Bayesian Optimization Algorithm (hBOA). The idea is to infer a Bayesian network (a directed acyclic graph, or DAG) from a selected set of successful candidates and then sample new candidates from it (this is also an Estimation of Distribution Algorithm, as described above). Because the model is a Bayesian DAG, it can capture conditional dependencies between variables, rather than being limited to a univariate model (like the cGA). This allows it to discover "building blocks" of progressively greater complexity, and it is able to solve some very difficult problems, including Hierarchical If-and-Only-If (HIFF), which standard evolutionary algorithms fail to solve. While this is indeed impressive, I am not personally convinced that many real-world problems possess such a structure (although one example they have found so far is finding ground states of spin glasses). My strong feeling is that such a technique would not translate well into neuro-evolution, because small groups of link weights in a neural network cannot impart fitness without reference to the whole - in other words, I don't think that, in general, building blocks in artificial neural networks even exist, certainly not in recurrent ones. [One possible counter-example to this intuition would be Enforced SubPopulations.] I may be proven wrong later, but that's how it looks to me right now.
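For reference, HIFF itself is simple to state even though it is brutal to optimise: a block only earns its bonus once every bit inside it agrees, at every level of a binary hierarchy, so low-level progress says nothing about the higher levels. Here is a small Java sketch of the fitness function, following the usual recursive definition (string length a power of two); the example values in main are just my own illustration.

```java
import java.util.Arrays;

/** Hierarchical If-and-Only-If (HIFF) fitness. A block contributes a bonus equal
 *  to its length only when all bits inside it agree, recursively at every level. */
public class Hiff {
    public static int fitness(boolean[] bits) {
        return score(bits, 0, bits.length);
    }

    private static int score(boolean[] bits, int from, int len) {
        if (len == 1) return 1;                      // single bits always score 1
        int half = len / 2;
        int total = score(bits, from, half) + score(bits, from + half, half);
        if (uniform(bits, from, len)) total += len;  // bonus only for a fully-agreeing block
        return total;
    }

    private static boolean uniform(boolean[] bits, int from, int len) {
        for (int i = from + 1; i < from + len; i++) {
            if (bits[i] != bits[from]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        boolean[] allOnes = new boolean[8];
        Arrays.fill(allOnes, true);
        boolean[] mixed = {true, true, false, false, true, false, true, true};
        System.out.println("all-ones fitness: " + fitness(allOnes)); // 32 for length 8
        System.out.println("mixed fitness:    " + fitness(mixed));
    }
}
```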