autotune and avatar

Although autotune is now best known as a stylized digital effect, it was originally designed to correct pitch in recordings. With it, singers can sing in perfect pitch, so long as they do not stray too far from the intended notes. Autotune does not even need to work in real time, and at a high level it is no different from an instrument synthesizer, only with the “instrument” sampled in real time. (Perhaps such a hybrid approach could reproduce real acoustic instruments even more realistically, and make almost anybody a “great” music player.) As the automated portion of autotune’s capability improves, less and less of the singer’s input is needed; one finds less and less need for the perfect singer, and more and more need for the perfect song and its performance intention. That, after all, is the essence of a creative work, not the much-valued virtuosity with which it is performed (prized for its “difficulty”).
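
At its core, the correction step is just pitch quantization. Here is a minimal sketch of that idea in Python, assuming an equal-tempered scale with A4 at 440 Hz; the function name and the correction threshold are my own illustrative choices, and a real system would also have to detect the pitch and resynthesize the audio, which is omitted here.

```python
import math

A4_HZ = 440.0  # reference tuning for A4 (assumption: standard concert pitch)


def snap_to_semitone(freq_hz: float, max_correction: float = 0.4) -> float:
    """Snap a detected frequency to the nearest equal-tempered semitone.

    `max_correction` is in semitones: if the detected pitch sits nearly
    halfway between two notes, the intended note is ambiguous, so the
    pitch is left unchanged (the "not too far off" caveat).
    """
    # Distance from A4 in fractional semitones.
    semitones_from_a4 = 12.0 * math.log2(freq_hz / A4_HZ)
    nearest = round(semitones_from_a4)
    if abs(semitones_from_a4 - nearest) > max_correction:
        return freq_hz  # too far off to guess the intended note
    # Frequency of the nearest semitone.
    return A4_HZ * 2.0 ** (nearest / 12.0)


# A slightly flat A4 gets pulled up to 440 Hz; a flat B4 gets pulled to ~493.9 Hz.
print(snap_to_semitone(435.0))  # -> 440.0
print(snap_to_semitone(490.0))  # -> ~493.88
```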

A similar thing has been taking place in motion picture production, with computer-assisted graphics taking over for effects and stunts. Lately, the production process for the movie Avatar has pushed this trend to a mini-plateau of sorts. Avatar, as you may recall, was produced by sampling the expressivity of the actors on a body grid, then re-rendering it in a very different way. Much like the human-controlled machines in the movie, the actors are just giving input to a machine, which follows the director’s desires. Again, as the technology improves, less and less of the actors’ input is needed, and eventually they, like the singers, will be unnecessary.

When it comes to the creative fields, as, I believe, in any field, the evolution of automation technology diminishes birth advantages, allows compartmentalization of skills, promotes specialization of skills, and therefore equalizes opportunities. The beneficiaries are people who engage in true creativity of the mind, both in the arts and in the engineering of the technology, while the losers are the human “performers”, save for the few truly great ones, who will be needed to go through the dehumanizing experience of being sampled as input for a machine.

So… study what a machine cannot do, or study how to make a machine do that.

fuzzy research

Every once in a while, newspapers publish these “popular science” articles that promulgate the latest fads in psychology, anthropology, or some such “fuzzy” social science. Here is one: “Did evolution make our eyes stand out? Researchers test ‘cooperative eye’ hypothesis in humans and apes.”

The cooperative eye hypothesis is that human eyes show a lot of white so that others can cooperate more easily just by following eye movement.

In a new study that is one of the first direct tests of this theory, researchers from the Max Planck Institute for Evolutionary Anthropology in Germany looked at what effect head and eye movements had on redirecting the gaze of great apes versus human infants.

In the study, a human experimenter did one of the following:

- Closed his eyes, but tilted his head up toward the ceiling
- Kept head stationary while looking at the ceiling
- Looked at the ceiling with both head and eyes
- Kept head stationary while looking straight ahead

Results showed that the great apes … were more likely to follow the experimenter’s gaze when he moved only his head. In contrast, the 40 human infants looked up more often when the experimenter moved only his eyes.

Now, look… something must have gotten lost, or this is a piece of pointless research that says nothing. I don’t see how this is a test of the hypothesis at all. Human eyeballs are more visible than ape eyeballs, so humans are more used to following eyeballs, while apes, out of necessity, are more used to following heads. But this has nothing to do with evolution, does it? How does it show that cooperation necessitates more visible eyeballs? And what about cats and owls, which also have highly contrasting eyeballs?