Abstract
Ethnographic and discourse perspectives on timbre in digital lutherie practices
Timbre, pitch, and timing are all often relevant in digital musical instrument (DMI) design. Compared with the latter two, timbre is neither easy to define nor to discretise when negotiating audio representations and gesture-sound mappings. In this talk, I will summarise findings across three studies: 1) semi-structured interviews with 29 instrument makers from commercial, research, independent, and artistic backgrounds, including composers and performers who build bespoke instruments as well as live coders; 2) a hackathon to prototype tools for supporting timbre exploration in DMI design, together with an ethnographic study of how participants engaged with the notion of timbre and how their conception of timbre was shaped through social interactions and technological encounters; 3) a corpus-assisted discourse analysis of “timbre” in NIME and ICMC proceedings. I will examine how timbre and timbre control are constructed through, for example, the psychoacoustical model of “timbre space,” sound programming languages and synthesis tools, machine learning and AI, and other trends in DMI design, reflecting on how ways of describing timbre shape how it is used and understood.
Learning and neural mechanisms of haptic communication
Many tasks, such as physical rehabilitation, vehicle co-piloting, or surgical training, rely on physical assistance from a partner. We have studied how humans physically interact with each other in such scenarios. By examining the behaviours of individuals whose right hands are physically connected, we could show how haptic information enables each partner to estimate the other's motor plan and use it to improve their own performance. When this model was embodied in a robot partner, it induced the same improvements in motor performance as a human partner, verifying the model. In this presentation, I will analyse in particular how the interaction influences the learning of a tracking task, and the underlying brain processes.
Bio
Charalampos Saitis studied mathematics in Athens and computational musical acoustics in Belfast, and obtained a PhD in music technology from McGill University. He is currently assistant professor in digital music processing at the Centre for Digital Music, Queen Mary University of London, where he leads the Communication Acoustics Lab. His research concerns the ways people perceive sound, and technologies for improving musical communication between humans and between humans and machines. He is a founding member of the International Conference on Timbre and co-edited the scientific volumes Timbre: Acoustics, Perception, & Cognition (2019) and Musical Haptics (2018).
Dr Katja Ivanova is a Lecturer in Human-Machine Interaction at Queen Mary University of London and a Research Associate in the Human Robotics Group at Imperial College London. Her main research interests are in multimodal human-robot interaction and haptic communication between agents as part of user-centred robotics for medical applications.