WEBVTT

00:00:00.000 --> 00:00:02.910 align:middle line:84%
Why do we do calibration of eye trackers?

00:00:02.910 --> 00:00:05.080 align:middle line:90%
Why is it so important?

00:00:05.080 --> 00:00:08.010 align:middle line:84%
Well, it's a necessary step before any recording

00:00:08.010 --> 00:00:11.400 align:middle line:84%
that you do with eye tracking for pupillometry,

00:00:11.400 --> 00:00:14.970 align:middle line:84%
because it establishes a link between the position

00:00:14.970 --> 00:00:17.730 align:middle line:84%
of the pupil in the eye and where in the world

00:00:17.730 --> 00:00:19.020 align:middle line:90%
the person is looking.

00:00:19.020 --> 00:00:23.910 align:middle line:84%
So what will happen if we don't do the calibration?

00:00:23.910 --> 00:00:25.868 align:middle line:84%
Well, in many cases, actually, the system

00:00:25.868 --> 00:00:27.660 align:middle line:84%
will not even allow you to take a recording

00:00:27.660 --> 00:00:32.110 align:middle line:84%
because it's just not able to record meaningful information.

00:00:32.110 --> 00:00:36.330 align:middle line:84%
But if the calibration is not well done,

00:00:36.330 --> 00:00:40.657 align:middle line:84%
then you're just going to get poor information about where

00:00:40.657 --> 00:00:41.490 align:middle line:90%
a person is looking.

00:00:41.490 --> 00:00:43.560 align:middle line:84%
So you think they're looking in one spot,

00:00:43.560 --> 00:00:47.310 align:middle line:84%
and they're actually looking in an entirely different spot.

00:00:47.310 --> 00:00:49.980 align:middle line:84%
How do you calibrate a stationary eye tracker?

00:00:49.980 --> 00:00:52.710 align:middle line:84%
Are there any specific protocols we follow to do it?

00:00:52.710 --> 00:00:54.450 align:middle line:84%
Yeah, there's a pretty standard protocol,

00:00:54.450 --> 00:00:56.485 align:middle line:90%
which we can demonstrate.
00:00:56.485 --> 00:00:57.735 align:middle line:90%
Would you like to have a seat?

00:01:00.840 --> 00:01:03.600 align:middle line:84%
So this is how we would normally carry out

00:01:03.600 --> 00:01:06.780 align:middle line:84%
a calibration and recording with the stationary eye tracker.

00:01:06.780 --> 00:01:10.440 align:middle line:84%
Here we have the actual stationary eye

00:01:10.440 --> 00:01:12.870 align:middle line:84%
tracker, the camera that's capturing his eyes,

00:01:12.870 --> 00:01:16.320 align:middle line:84%
and he's going to fix his eyes on the screen.

00:01:16.320 --> 00:01:18.570 align:middle line:84%
And the software is going to present him

00:01:18.570 --> 00:01:21.540 align:middle line:84%
with a series of visual targets on the screen

00:01:21.540 --> 00:01:23.310 align:middle line:90%
in different locations.

00:01:23.310 --> 00:01:26.820 align:middle line:84%
And he's supposed to keep his head still and follow

00:01:26.820 --> 00:01:29.350 align:middle line:90%
the targets with his eyes.

00:01:29.350 --> 00:01:30.555 align:middle line:90%
So we'll start this now.

00:01:37.150 --> 00:01:40.240 align:middle line:84%
And as this unfolds, he should keep his eyes

00:01:40.240 --> 00:01:43.120 align:middle line:84%
as precisely as possible in the middle of each target.

00:01:43.120 --> 00:01:45.190 align:middle line:84%
And once we come to the end of this,

00:01:45.190 --> 00:01:50.500 align:middle line:84%
the system now has a few samples of positions of the pupil

00:01:50.500 --> 00:01:53.690 align:middle line:84%
and where each corresponded to in the world.
00:01:53.690 --> 00:01:55.930 align:middle line:84%
So now it can generalise from that information

00:01:55.930 --> 00:01:59.590 align:middle line:84%
and determine where he's looking anywhere

00:01:59.590 --> 00:02:03.150 align:middle line:84%
on the screen, in the area that was calibrated.
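NOTE: The generalisation step described above can be sketched in code. This is a minimal illustration, not the method any particular eye-tracking system uses: it assumes the calibration yields pairs of pupil positions (camera coordinates) and known target positions (screen coordinates), and fits a simple affine mapping by least squares so that new pupil positions can be mapped to estimated gaze points. All coordinate values below are made up for the example.

```python
import numpy as np

# Hypothetical calibration samples: pupil positions in camera pixels,
# recorded while the participant fixated a 3x3 grid of known targets.
pupil = np.array([[310, 220], [410, 225], [510, 230],
                  [315, 320], [415, 325], [515, 330],
                  [320, 420], [420, 425], [520, 430]], dtype=float)

# The corresponding on-screen target positions, in screen pixels.
targets = np.array([[160, 120], [960, 120], [1760, 120],
                    [160, 540], [960, 540], [1760, 540],
                    [160, 960], [960, 960], [1760, 960]], dtype=float)

# Fit an affine map: [px, py, 1] @ W ~= [screen_x, screen_y],
# solved in one shot with least squares over all samples.
A = np.hstack([pupil, np.ones((len(pupil), 1))])
W, *_ = np.linalg.lstsq(A, targets, rcond=None)

def gaze(px, py):
    """Map a new pupil position to an estimated screen coordinate."""
    return np.array([px, py, 1.0]) @ W

# A pupil position seen during calibration maps back near its target.
print(gaze(415, 325))
```

Real systems typically use richer models (e.g. polynomial terms, or a 3D geometric eye model) and average many samples per target, but the principle is the same: a few known correspondences let the system interpolate gaze anywhere inside the calibrated area.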