Friday, June 03, 2005

Powerful AI + Biofeedback = ???

Imagine, if you will, two things:

First, the existence (at some unspecified point in the future) of some really good Machine Learning algorithms; perhaps not yet surpassing human intellect or even passing the Turing Test, but considerably more powerful than today's methods.

Second, imagine some relatively inexpensive and portable real-time brain monitoring technologies, perhaps not sensitive at the level of the individual neuron, but maybe sensitive enough to differentiate between cortical columns; in other words, far better than any current EEG or fMRI technologies.

Now let's imagine an interesting application that might be developed as a result of these hypothetical technological breakthroughs: a program that could generate pleasing novel music (or any other media form) in real-time based on the physiological/neurological state of the listener!

PHASE I: Data-Mining

Imagine that a listener, the subject of this thought-experiment, is exposed to a set of songs. They listen to each song in turn and then give some form of feedback about their reaction to it; perhaps a simple thumbs-up or thumbs-down, perhaps a rating from 1 to 10. The particular form of feedback is inconsequential, though the more information given the better. Throughout this listening session the listener is attached to some sort of brain-monitoring device (as alluded to above), which relays a vast amount of information (possibly Terabytes, certainly Gigabytes) to the CPU. The CPU thus concurrently receives 3 forms of data: 1) the raw waveform information of the given audio file, 2) the real-time cortical activity patterns of the listener's brain in response to this auditory stimulus, and 3) the consciously elicited response of the listener, recorded in some form of "rating".
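To make this a little more concrete, here is a minimal sketch (in Python, purely illustrative) of how one such listening session might be recorded. The names (ListeningSession, cortical_activity, and so on) are placeholders of my own invention, not anything that exists today.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class ListeningSession:
        """One song's worth of the three concurrent data streams."""
        waveform: np.ndarray           # 1) raw audio samples of the song
        cortical_activity: np.ndarray  # 2) time x channel readings from the brain monitor
        rating: float                  # 3) the listener's conscious response, e.g. 1 to 10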

The CPU now has the task of making two types of highly complex associative mappings. First, it needs to correlate the auditory stimuli with the effects produced in the cortical dynamics of the listener's brain. This mapping must be good enough to be at least somewhat predictive; i.e. the CPU should be able to make a decent guess about the effect that a new piece of audio will have on the neurological state of the listener. Second, the CPU needs to make an additional complex associative mapping between the cortical state of the listener and the conscious response elicited (e.g. thumbs-up or thumbs-down).

Waveform Information -> Measurable Cortical Dynamics -> Conscious Elicited Response
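Here is a hedged sketch of those two mappings, using off-the-shelf regression models as stand-ins for the hypothetical future ML algorithms. The feature extractors are crude placeholders of my own, and the code builds on the ListeningSession record sketched above; none of it is meant as a definitive implementation.

    from sklearn.ensemble import RandomForestRegressor
    import numpy as np

    def extract_audio_features(waveform):
        # Crude placeholder featurization: a fixed-length spectral summary.
        spectrum = np.abs(np.fft.rfft(waveform))
        return spectrum[:64]

    def summarize_cortex(activity):
        # Crude placeholder: average activity per channel over the whole song.
        return activity.mean(axis=0)

    # Mapping 1: audio features -> predicted cortical dynamics
    audio_to_cortex = RandomForestRegressor()
    # Mapping 2: cortical dynamics -> predicted conscious rating
    cortex_to_rating = RandomForestRegressor()

    def train(sessions):
        """Fit both mappings from a list of recorded ListeningSession records."""
        X_audio = np.stack([extract_audio_features(s.waveform) for s in sessions])
        X_cortex = np.stack([summarize_cortex(s.cortical_activity) for s in sessions])
        y_rating = np.array([s.rating for s in sessions])
        audio_to_cortex.fit(X_audio, X_cortex)
        cortex_to_rating.fit(X_cortex, y_rating)

    def predict_rating(waveform):
        """Chain the two mappings: audio -> predicted cortical state -> predicted rating."""
        cortex = audio_to_cortex.predict(extract_audio_features(waveform)[None, :])
        return cortex_to_rating.predict(cortex)[0]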

PHASE II: Experimental Generation of New Auditory Stimuli

Now the CPU can begin to experimentally generate new music, and can receive meaningful feedback in real-time about the success of its efforts! [Note that there may be major differences between individual brains! This means that the Phase I data-mining process would likely have to be repeated for each user.]

We will assume that the CPU's initial efforts would be utterly disastrous, but could presumably improve with time via evolutionary selective pressures combined with massive compute power (this is the future after all...)
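As a toy illustration of what such selective pressure might look like, here is a sketch of a simple mutate-and-select loop. It reuses predict_rating() from the Phase I sketch as a stand-in fitness function, whereas the real system would score candidates against live brain feedback; the mutation operator and population sizes are arbitrary assumptions.

    import numpy as np

    def evolve_soundscape(seed_waveform, generations=100, population=32, rng=None):
        """Iteratively mutate a seed waveform, keeping the best-scoring candidate."""
        rng = rng or np.random.default_rng()
        best = seed_waveform
        for _ in range(generations):
            # Mutate the current best candidate to form a new population.
            candidates = [best + rng.normal(scale=0.01, size=best.shape)
                          for _ in range(population)]
            # Score each candidate with the audio -> cortex -> rating chain;
            # in the imagined scenario this would come from live brain monitoring.
            scores = [predict_rating(c) for c in candidates]
            best = candidates[int(np.argmax(scores))]
        return best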


The end result of this process would (hypothetically) be personalized music/sound-scapes that fluctuate and re-adapt in real-time depending on the minute moment-by-moment details of the listener's mood and state of attention, and would be custom-tailored to the brain-dynamics of that particular individual.

An offshoot of this idea might be the cultivation of particular states of consciousness such as 'alertness', 'relaxation', 'creativity', etc...
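One way to sketch that offshoot: instead of maximizing a predicted rating, score candidate audio by how closely its predicted cortical response approaches a desired target profile (say, a previously recorded 'relaxation' state). This reuses audio_to_cortex and extract_audio_features from the Phase I sketch; target_state is an assumed illustration, not a measured quantity.

    def score_toward_state(waveform, target_state):
        # Higher is better: negative distance between the predicted cortical
        # response and the desired profile (e.g. a recorded 'relaxation' state).
        predicted = audio_to_cortex.predict(extract_audio_features(waveform)[None, :])[0]
        return -float(np.linalg.norm(predicted - target_state))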

Also, the sensory stimuli need not (as mentioned above) be limited to audio...
