We have been looking at how to record music-related motion, and how to get data streams from this. And we've also seen that motion and emotion are closely connected. So I'd like to ask you, Alexander: do you think it's possible to identify human emotion from motion data?

I think the short answer is yes, but it's very difficult to do. Still, I think it's important to keep in mind that we have many different layers here. One thing is the actual physical motion, which we can measure with different types of motion capture systems. But from there to understanding more about what actually happens emotionally, how we perceive this in our brain, is a long way, and there are many, many layers in between.

That's why I think, when we work with this as researchers, we need to do it systematically and look at the different layers separately. For example, we can go from the continuous physical motion to how we perceive it as an action with a beginning and an end. And from there it's possible to look at how we experience some of the more expressive and emotional qualities in this. Then, obviously, we also need to combine quantitative methods and techniques with qualitative ones, because the interpretation of the data is very much connected to the cultural background, the context it is situated in, et cetera. So the long answer is that it's difficult, and the short answer is that yes, it's possible.

OK, so then we know that research is going on into how to identify human emotion from motion. But what do you think: is it possible to identify emotion from motion? And in what way are the two related?