We have been looking at many different types of technologies for capturing human motion, muscle sensing, and other types of data. These are used for analysis and for more scientific purposes. In our group here at the University of Oslo, and also around the world, many people are using the same types of technologies to create music. One term that is often used to describe this is "NIME", which stands for New Interfaces for Musical Expression. One of the ideas here is that we use technology to create sounds and music, but this also requires that we work with the mappings from different types of motion to sound. And Kristian, you have a sensor here now. Can you tell me a little bit about how you have been approaching this when trying to make it into a musical instrument?

Sure. I've been looking at this armband called the MYO, which has eight sensors that capture the muscle tension of the lower arm, in addition to a gyroscope and an accelerometer, which allow you to extract the orientation of the device. I've been using it as a musical instrument to control sound that is already there, processing it with sound effects, and also to trigger sounds, like piano sounds or drum sounds.
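To make the sensor description a bit more concrete, here is a minimal sketch in Python of how the two data streams Kristian mentions might be combined: the gyroscope and accelerometer fused into a forearm elevation (pitch) estimate, and the eight EMG channels collapsed into a single muscle-tension value. This is not the MYO SDK; every function name, the filter constant, and the 8-bit EMG range are assumptions for illustration only.

import math

# Hypothetical helpers (not the MYO SDK); names and value ranges are assumptions.

def accel_pitch(ax, ay, az):
    # Pitch angle (radians) implied by the gravity vector alone.
    return math.atan2(-ax, math.sqrt(ay * ay + az * az))

def update_pitch(prev_pitch, gyro_pitch_rate, ax, ay, az, dt, alpha=0.98):
    # Complementary filter: trust the gyroscope short-term,
    # the accelerometer long-term, to get a stable orientation estimate.
    gyro_estimate = prev_pitch + gyro_pitch_rate * dt
    return alpha * gyro_estimate + (1.0 - alpha) * accel_pitch(ax, ay, az)

def muscle_tension(emg_channels):
    # Collapse the eight rectified EMG channels into one 0..1 control value,
    # assuming 8-bit signed samples in the range -128..127.
    mean_abs = sum(abs(s) for s in emg_channels) / len(emg_channels)
    return min(mean_abs / 128.0, 1.0)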
[DEMONSTRATION: SOUNDS AND MUSIC PLAYING, REVERBERATING, DRUM SOUNDS, MUSIC INTENSIFYING]

It's quite effective in how it works, with low latency and a very high sampling rate on the data.

How is this connected to the studies you have been doing on sound and action? Or how does it work the other way around, really?

What is interesting is that when you know how people relate movement to sound, you can use that knowledge to map the data from this device into a synthesiser. For instance, if you were to control some sort of tone going up and down, maybe you would want to use this armband going up and down to control that pitch. So if you know that people relate pitch to vertical movement up and down, maybe you want to use that information and let this movement be mapped to pitch on your synthesiser.
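As a rough illustration of the pitch mapping Kristian describes, the sketch below maps a forearm elevation angle onto a synthesiser frequency, exponentially so that equal angle steps feel like roughly equal musical steps. The frequency range and angle limits are assumptions, not values from the video.

import math

def angle_to_frequency(elevation, low_hz=110.0, high_hz=880.0,
                       min_angle=-math.pi / 2, max_angle=math.pi / 2):
    # Normalise the arm's vertical angle to 0..1, then map it exponentially
    # onto the chosen frequency range so pitch rises as the arm is raised.
    t = (elevation - min_angle) / (max_angle - min_angle)
    t = max(0.0, min(1.0, t))
    return low_hz * (high_hz / low_hz) ** t

# Example: arm held horizontally (0 radians) lands mid-range, about 311 Hz.
print(round(angle_to_frequency(0.0), 1))

The resulting frequency could then be sent to whatever synthesiser is in use, for instance as MIDI or OSC control messages.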
One thing that many people are often a little bit confused about is this: with the technologies that you have, you can either make an instrument or more of a musical device. What is your take on this?

Well, I guess that's a big debate, but I don't see why we should confine musical instruments to very narrow things. I would say that if you use this to control a DJ mix, it's still a musical instrument, not just some sort of musical device. A musical instrument can be many things.

So one of the things that we see when we work with music technology for creating new music is that it is possible to use knowledge from our scientific studies also for creative purposes. And what we have also learned is that it's possible to use really any type of technology to create new musical instruments.