Presentation: Speculative Voices and Machine Learning
FREE ENTRY
Description
The tradition of vocal portraiture has recently been reinvigorated by contemporary machine-learning models such as Speech2Face and Wav2Pix, which attempt to generate the face of a speaker from their vocal signal. Rather than forensically reconstructing an individual's 'true' face, these models speculate on the relationship between the vocal signal and the physical facts of the human vocal apparatus in order to generate an 'averaged' subject. In this lecture-presentation, researchers and artists Murad Khan and Martin Disley outline a set of adversarial attacks on Speech2Face that perturb the generated face: vocal signals that deform the hallucinated anatomy and create speculative physiologies capable of resisting numeralisation.
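For readers curious about the mechanics, the kind of attack described above can be loosely illustrated. Speech2Face's weights are not publicly released, so the snippet below is purely a hypothetical sketch: it stands in a toy linear voice-to-face encoder and applies a small sign-gradient (FGSM-style) step to the audio, nudging the vocal signal in the direction that most displaces the predicted face embedding while keeping the perturbation bounded. None of the names or parameters here come from the presenters' actual method.

```python
import numpy as np

# Hypothetical stand-in for a voice-to-face encoder (Speech2Face is not public):
# a fixed linear map W from a 128-sample "voice" vector to a 16-dim "face" embedding.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 128))

def face_embedding(audio):
    """Toy voice-to-face encoder: face embedding = W @ audio."""
    return W @ audio

def adversarial_voice(audio, epsilon=0.01):
    """One FGSM-style step: ascend 0.5 * ||W x||^2, whose gradient is W.T @ (W x).

    The sign of the gradient gives the per-sample direction that most inflates
    the face embedding; epsilon bounds the perturbation so the voice stays
    close to the original signal.
    """
    grad = W.T @ face_embedding(audio)
    return audio + epsilon * np.sign(grad)

voice = rng.standard_normal(128)
perturbed = adversarial_voice(voice)

# The perturbation is small in the signal domain but shifts the generated face.
drift = np.linalg.norm(face_embedding(perturbed) - face_embedding(voice))
print(float(np.max(np.abs(perturbed - voice))), float(drift))
```

In a real attack one would backpropagate through the actual model and optimise toward (or away from) a target face; the sketch only shows the core idea of an imperceptible audio perturbation steering the hallucinated anatomy.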