Facial Noise, mimetic recognition feedback

Facial Noise, a performance artwork created by Noa Dolberg, performed by Azumi Oe, with sound design by Khen Price, uses data from facial expressions to manipulate sound generated in real time. FaceOSC, a facial-tracking application built by Kyle McDonald with openFrameworks, supplies facial expression data as raw material for modulating sonic objects and textures in Max/MSP and Ableton Live. The performance is rooted in traditional Butoh, a style known for its controlled movements and for delving into the extremes of the human condition. During the performance, the face becomes a mask that nullifies immediately recognisable human expressions and hides subjective emotive states. While the perception of expression is crucial in humans from almost the moment of birth, allowing us to recognise emotional tendencies and comprehend social interaction, Facial Noise rewires this primeval and innate communication system into a musical instrument. The mapping between facial muscle movements and sound makes no attempt to create correspondences between representational sounds and recognisable expressions. It is an electro-facial hack in which expression is harnessed in its raw, abstracted form. The system is a good example of how the mimetic construct of a recognition/mapping system can feed back on the expressions themselves, and indeed on the language of the system itself.

Paul Prudence
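
For readers curious about the plumbing, the sketch below shows one way FaceOSC data could be bridged into a sound engine. It is illustrative only: the incoming addresses (/gesture/mouth/width, /gesture/eyebrow/left, /gesture/eyebrow/right) and the default port 8338 belong to FaceOSC's standard message set, while the python-osc bridge, the outgoing addresses (/noise/grain, /noise/cutoff), the scaling ranges, and the target port 9000 are assumptions standing in for whatever the piece's actual Max/MSP and Ableton Live mapping does.

```python
# Minimal sketch of a FaceOSC -> sound-parameter bridge (not the artists' patch).
# Assumes FaceOSC is sending to its default port 8338 and that a sound engine
# (e.g. a Max/MSP patch) is listening for control messages on port 9000.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical sound engine receiving scaled control values.
sound_engine = SimpleUDPClient("127.0.0.1", 9000)


def scale(value, in_lo, in_hi):
    """Linearly rescale a raw FaceOSC gesture value into a clamped 0..1 range."""
    span = in_hi - in_lo
    normalised = (value - in_lo) / span if span else 0.0
    return max(0.0, min(1.0, normalised))


def on_mouth_width(address, width):
    # Deliberately non-representational: mouth width drives grain density,
    # not anything that "sounds like" a mouth. Range values are indicative only.
    sound_engine.send_message("/noise/grain", scale(width, 10.0, 18.0))


def on_eyebrow(address, height):
    # Eyebrow raise modulates a filter-cutoff control value.
    sound_engine.send_message("/noise/cutoff", scale(height, 7.0, 10.0))


dispatcher = Dispatcher()
dispatcher.map("/gesture/mouth/width", on_mouth_width)
dispatcher.map("/gesture/eyebrow/left", on_eyebrow)
dispatcher.map("/gesture/eyebrow/right", on_eyebrow)

# FaceOSC broadcasts its tracking data to port 8338 by default.
server = BlockingOSCUDPServer(("127.0.0.1", 8338), dispatcher)
server.serve_forever()
```

The point of such a bridge is that the routing is arbitrary by design: any gesture channel can be wired to any synthesis parameter, which is precisely what lets the piece treat expression as abstract control data rather than as emotional signal.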
