Bjørn Dahl Kristensen
Navigating and maintaining social affiliation is a critical task of human life. If parsing the social world into affiliative groups forms a core, generative mechanism of the evolved human mind, even preverbal infants may differentiate instances of inclusion and exclusion. On the one hand, whether novel agents are socially included or excluded may serve as an important cue to their value as social partners, and so preverbal infants might expect that third-party observers of exclusion will themselves continue to discriminate by avoiding the previously excluded agent. On the other hand, emerging evidence points to a preverbal sympathy response for victims of aggression, suggesting that preverbal infants might instead expect third-party observers to sympathetically approach the victims of exclusion. Here, we present 13- to 18-month-old infants with animated depictions of a repeatedly included agent and a repeatedly excluded agent, testing whether they expect a neutral observer to approach and affiliate with the included or the excluded agent.
Bineeth Kuriakose
In today's digital landscape, virtual collaboration is increasingly prevalent, presenting unique challenges in understanding group dynamics due to the absence of traditional non-verbal cues like gaze behavior. This project seeks to address these challenges by examining how post-session gaze analytics can reveal patterns of cognitive overload and disengagement during structured collaborative activities.
Participants will engage in brainstorming sessions and other interactive tasks on a virtual collaborative platform. Key metrics, including fixation duration and shared attention, will be analyzed to assess group engagement levels. Using cost-effective webcam-based eye-tracking tools, we will collect gaze data from small groups and annotate it with the session events needed to interpret collaborative interactions.
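As an illustration of how the two headline metrics could be operationalised, the sketch below computes fixation durations with a simple dispersion-based (I-DT) detector and shared attention as the proportion of time-aligned samples on which two participants' gaze points fall close together. The data layout and the pixel and duration thresholds are assumptions for exposition, not project specifications.

```python
import numpy as np

def fixation_durations(t, x, y, disp_px=50, min_dur=0.1):
    """Simplified dispersion-based (I-DT) fixation detection.
    t: timestamps in seconds; x, y: gaze coordinates in pixels (arrays).
    Returns a list of fixation durations in seconds."""
    durations, start = [], 0
    for end in range(1, len(t)):
        xs, ys = x[start:end + 1], y[start:end + 1]
        # Dispersion = horizontal extent + vertical extent of the window.
        if (xs.max() - xs.min()) + (ys.max() - ys.min()) > disp_px:
            if t[end - 1] - t[start] >= min_dur:
                durations.append(t[end - 1] - t[start])
            start = end  # restart the window once the fixation breaks
    if t[-1] - t[start] >= min_dur:  # close any fixation still open at the end
        durations.append(t[-1] - t[start])
    return durations

def shared_attention(xa, ya, xb, yb, radius_px=100):
    """Proportion of time-aligned samples on which two participants'
    gaze points lie within radius_px of each other."""
    return float(np.mean(np.hypot(xa - xb, ya - yb) < radius_px))
```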
The dataset generated by this research is intended to advance webcam-based eye-tracking methodology and enrich our understanding of virtual collaboration. By exploring post-session gaze analytics, this project endeavors to deepen our comprehension of group dynamics within virtual environments, offering valuable insights for both educational settings and technological development.
Erik Kjos Fonn
Normative concerns about distributive fairness lie at the core of both human societal living and political life. This project aims to test how third-party expectations about distributive fairness develop across the lifespan, from infancy to adulthood. Since pupillometry can be used with both adults and infants, I propose a pupillometry-based Violation-of-Expectation method that should, in principle, allow for direct comparisons on a large developmental scale. This approach may shed light on the earliest developmental origins of humans' conception of fairness. Additionally, demonstrating that identical methods can test and compare third-party expectations at every age methodologically sets the stage for stringent tests of theories of developmental stability and change.
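A minimal sketch of the core analysis, assuming a standard baseline-correction approach: pupil dilation after the outcome is referenced to a pre-outcome baseline, and the Violation-of-Expectation effect is the within-participant difference between unexpected (unfair) and expected (fair) outcomes. Window lengths and trial structure are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def trial_dilation(t, pupil, outcome_onset, baseline_s=0.5, window=(0.5, 2.0)):
    """Baseline-corrected pupil dilation for one trial.
    t: timestamps (s); pupil: pupil-size samples; outcome_onset: time (s)
    at which the fair or unfair distribution is revealed."""
    base = pupil[(t >= outcome_onset - baseline_s) & (t < outcome_onset)].mean()
    resp = pupil[(t >= outcome_onset + window[0]) &
                 (t < outcome_onset + window[1])].mean()
    return resp - base  # positive values index dilation, i.e., surprise

def voe_effect(unfair_means, fair_means):
    """Paired test of per-participant mean dilation, unfair vs. fair.
    The identical computation applies at every age tested."""
    return stats.ttest_rel(unfair_means, fair_means)
```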
Luca Onnis
By recording where and how long individuals look at different stimuli on a screen, researchers can infer the cognitive processes underlying language learning and processing. This research tracks eye movements across three studies. Experiment 1 measures fixations while participants listen to a novel miniature language (ML) whose words are visually mapped onto specific screen locations. The ML requires learning non-adjacent syntactic dependencies between the first and last word in each sentence (e.g., “pel _ jic” in “pel kicey jic”). Anticipatory looks to the correct screen locations before the last word appears can reveal implicit non-adjacent learning without explicit tasks. Experiment 2 uses a blank-screen paradigm to test whether grammar learning can be grounded in visual perception: we analyze spontaneous eye movements while participants listen to the same auditory ML without visual mappings, expecting participants to develop eye-movement trajectories to specific screen locations if they learn the non-adjacencies. Experiment 3 combines both paradigms to investigate whether learning the ML with spatial support first (auditory + visual modalities) benefits subsequent learning of a second ML presented only in the auditory modality. Together, these experiments test the hypothesis that abstract aspects of grammar can be perceptually grounded in the visual modality.
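As a sketch of the anticipation measure in Experiments 1 and 2, one could score the proportion of gaze samples landing in the screen region of the grammatically predicted final word during the interval before that word is heard. The AOI coordinates and word labels below are hypothetical placeholders.

```python
import numpy as np

# Hypothetical AOIs: on-screen rectangles (x, y, width, height) in pixels
# where the final ML words are (or, in the blank-screen paradigm, had
# previously been) displayed.
AOIS = {"jic": (800, 400, 200, 200), "rud": (200, 400, 200, 200)}

def in_aoi(px, py, aoi):
    x, y, w, h = aoi
    return (x <= px <= x + w) and (y <= py <= y + h)

def anticipation_score(t, gx, gy, window, predicted_word):
    """Proportion of gaze samples within `window` (e.g., from the offset of
    the middle word to the onset of the final word) that fall in the AOI of
    the grammatically predicted final word."""
    lo, hi = window
    sel = (t >= lo) & (t < hi)
    hits = [in_aoi(x, y, AOIS[predicted_word]) for x, y in zip(gx[sel], gy[sel])]
    return float(np.mean(hits)) if hits else np.nan
```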
Audun Rosslund
The COVID-19 pandemic massively changed the context and feasibility of experimental and developmental research. This new reality, together with considerations about sample diversity and naturalistic settings for developmental research, highlights the need for online study solutions. To this end, we (Lo et al., 2024) created e-Babylab, an open-source, browser-based tool for conducting unmoderated online studies, with a focus on infants and children. e-Babylab is equipped with a machine-learning gaze-estimation algorithm that enables researchers to deploy eye-tracking experiments remotely: participants (infants, children, and adults alike) can take part online, using their computer’s webcam to capture gaze movements (see Steffan et al., 2024 for an initial validation study). The aim of the current project is to fully characterise, and improve, the temporal and spatial accuracy of this webcam-based eye-tracking approach with a sample of 18-month-old infants.
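A sketch of the two quantities to be characterised, under assumed screen geometry (pixel density and viewing distance would have to be measured or estimated per household setup; the defaults below are placeholders, not e-Babylab values):

```python
import numpy as np

def spatial_accuracy_deg(gx, gy, tx, ty, px_per_cm=38.0, dist_cm=60.0):
    """Mean angular offset between webcam-estimated gaze (gx, gy) and a
    known calibration-target position (tx, ty), all in pixels."""
    offset_cm = np.hypot(gx - tx, gy - ty) / px_per_cm
    return float(np.degrees(np.arctan2(offset_cm, dist_cm)).mean())

def temporal_profile(timestamps_s):
    """Webcam sampling is irregular, so temporal accuracy is summarised by
    the distribution of inter-sample intervals rather than a nominal rate."""
    isi = np.diff(np.sort(timestamps_s))
    return {"median_isi_ms": 1000 * float(np.median(isi)),
            "isi_sd_ms": 1000 * float(np.std(isi)),
            "effective_hz": 1.0 / float(np.median(isi))}
```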
Franziska Köder
Attention-Deficit/Hyperactivity Disorder (ADHD) diagnoses in adults have increased significantly in recent years. However, while the effect of ADHD on pragmatic abilities is well studied in children, research on the communication abilities of adults with ADHD is surprisingly scarce. This study aims to compare the communicative behaviors of adults with and without ADHD in naturalistic dialogues, addressing a critical gap in existing research. Both verbal features (topic shifts, interruptions, excessive talking, disfluencies) and non-verbal features (gaze patterns, pupil metrics) will be explored. Forty dyadic interactions involving adult native speakers of Norwegian will be recorded, with participants wearing eye-tracking glasses. Pairs will consist of either one adult with ADHD and one neurotypical adult, or two neurotypical adults. Conversations will be minimally prompted to maintain naturalness. After the interaction, participants will complete questionnaires assessing their interaction experiences, ADHD symptoms, and pragmatic abilities. Several pragmatic, gaze, and pupil measures will then be analysed and compared across groups. Crucially, the effect of linguistic events such as interruptions or topic shifts on eye measures will be explored. Systematically mapping both verbal and non-verbal communication differences for the first time is a crucial step towards a better understanding of how ADHD affects social interactions.
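For the event-locked part of the analysis, a minimal sketch: epoch the continuous pupil trace around annotated dialogue events (interruptions, topic shifts), resample onto a common time grid, and baseline-correct each epoch. Epoch lengths and the sampling grid are assumptions.

```python
import numpy as np

def event_locked_pupil(t, pupil, event_times, pre_s=1.0, post_s=3.0, fs=50):
    """Epoch a continuous pupil trace around annotated dialogue events and
    baseline-correct each epoch to its pre-event mean.
    Returns (time_grid, epochs), epochs with shape (n_events, n_samples)."""
    grid = np.arange(-pre_s, post_s, 1.0 / fs)
    epochs = []
    for ev in event_times:
        epoch = np.interp(ev + grid, t, pupil)   # resample onto a common grid
        epoch -= epoch[grid < 0].mean()          # subtract the pre-event mean
        epochs.append(epoch)
    return grid, np.vstack(epochs)

# A group comparison would then contrast mean post-event traces for, e.g.,
# interruptions in ADHD-neurotypical pairs vs. neurotypical-only pairs.
```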
Natalia Kartushina
Infant-directed speech (IDS) – the register many adults adopt when talking to infants – is characterised by a reduced speech rate, exaggerated pitch, and acoustic expansion of corner vowels. These features are considered to help infants in the task of language learning, with some scholars suggesting that IDS is a human universal – a form of ‘natural pedagogy’. Yet, emerging evidence from less-studied populations suggests that IDS can vary across languages and dialects. While many previous studies have examined the relationship between IDS and word comprehension and production in infants, reporting inconsistent results, only a few studies have examined the relationship between the acoustic features of IDS and infants’ sound representations; yet speech input has been claimed to shape early sound representations. To address this gap, we will investigate infants’ discrimination of the vowel contrast /i/-/u/ in an eye-tracking task and examine its relationship to the acoustic features of the same sounds produced in IDS. Findings from this project will contribute theoretically to our growing knowledge of the role of IDS in language development, especially for Norwegian, an understudied language in this respect.
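One way the link could be quantified, sketched below under the assumption that caregiver vowel formants have already been extracted (e.g., in Praat): per dyad, compute the acoustic separation of /i/ and /u/ in IDS and correlate it with the infant's eye-tracking discrimination score.

```python
import numpy as np
from scipy import stats

def iu_separation_hz(f1_i, f2_i, f1_u, f2_u):
    """Euclidean F1/F2 distance (Hz) between a caregiver's mean /i/ and
    mean /u/ tokens; larger values indicate more distinct IDS vowels."""
    return float(np.hypot(np.mean(f1_i) - np.mean(f1_u),
                          np.mean(f2_i) - np.mean(f2_u)))

def ids_discrimination_link(separations, discrimination_scores):
    """Correlate per-dyad IDS vowel separation with the infant's
    discrimination score from the eye-tracking task."""
    return stats.pearsonr(separations, discrimination_scores)
```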
Julien Mayor
How do young infants begin to crack the code of language? By their first birthday, they are already mapping sounds to meanings, but the mechanisms behind this early feat remain elusive, and studies that point to specific strategies are hampered by confounds inherent in typically used experimental designs, in which two pictures are presented side by side and infants are prompted to look at either of them. In the current project, we will investigate whether and how contextual specificity aids word comprehension in Norwegian infants using pupillometry. That is, instead of presenting infants with two visual objects, which makes gaze patterns prone to biases from saliency or familiarity differences between the objects, we will present infants with single objects, paired with auditory prompts that are either matched, contextually related, or contextually unrelated to the visual stimulus, and we will examine the time course of pupil dilation as an index of the cognitive effort driven by each auditory condition.
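A sketch of the planned time-course comparison, assuming each trial's pupil trace has already been resampled onto a common grid and baseline-corrected at prompt onset; the predicted gradation of cognitive effort (unrelated > related > matched) would appear as graded separation of these curves.

```python
import numpy as np

CONDITIONS = ("matched", "related", "unrelated")  # auditory prompt types

def condition_timecourses(trials):
    """Average baseline-corrected pupil traces per auditory condition.
    `trials` is a list of (condition, trace) pairs; every trace is assumed
    to share one time grid. Returns {condition: mean trace}."""
    return {cond: np.vstack([tr for c, tr in trials if c == cond]).mean(axis=0)
            for cond in CONDITIONS}

def peak_ordering(means):
    """Check the predicted effort ordering on peak dilation."""
    peaks = {c: float(means[c].max()) for c in CONDITIONS}
    return peaks["unrelated"] > peaks["related"] > peaks["matched"], peaks
```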
Ane Theimann
This project aims to investigate the predictive abilities of Norwegian-speaking adults through two eye-tracking experiments focused on semantic and action prediction. By examining both types of prediction, the study seeks to determine whether a link exists between adults' abilities to predict semantic information and their ability to predict actions. In a prior study with toddlers, no link was found between semantic and action predictions, suggesting this ability may not yet be fully developed in young children. However, preliminary findings in a control group of 20 adults revealed a promising link between these predictive abilities. While this sample is too small for publication, it supports the potential for significant findings in adults. Therefore, this project proposes to expand the study to a larger sample of 65 adults, a number determined by a power analysis to achieve reliable results. The findings from this research will advance our understanding of prediction in language processing and contribute to the broader fields of psycholinguistics and cognitive science. This study is an extension of Ane Theimann’s doctoral research, based on findings from one of the studies in her dissertation conducted under the supervision of Franziska Köder, Monica Norvik, and Nivedita Mani.
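The target of 65 participants is the authors' own power-analysis result; purely as an illustration of the computation, the sketch below derives a required sample for detecting a correlation between the two predictive abilities via the Fisher z approximation. The pilot effect size used here is a placeholder, not the value from the 20-adult control group.

```python
import numpy as np
from scipy import stats

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Sample size to detect correlation r (two-sided) via the Fisher z
    approximation: n = ((z_alpha + z_power) / atanh(r))**2 + 3."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return int(np.ceil(((z_a + z_b) / np.arctanh(r)) ** 2 + 3))

print(n_for_correlation(0.34))  # -> 66 with these illustrative inputs
```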
Bruno Laeng
Rotoreliefs are kinetic displays that, through rotating motion, enhance the impression of depth in two-dimensional patterns. The perceived depth is also bistable, i.e., reversible: the same pattern can be seen either as a convex volume or as a tunnel, i.e., a concave surface. It is hypothesized that the oculomotor system responds to these illusory depth patterns by generating the same ‘reflexive’ responses that are typical of real 3D volumes: 1) if the relief appears convex to the observer, the eyes converge toward the illusory near position and, concomitantly, the pupils constrict; 2) if the relief appears concave to the observer, the eyes diverge toward the illusory far position and, concomitantly, the pupils dilate. These oculomotor changes should follow the changes in depth reported by observers (via key presses) during each trial.
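A sketch of how the predicted convergence and divergence could be quantified from binocular gaze, assuming on-screen gaze points for each eye and placeholder viewing geometry (interpupillary distance, viewing distance, pixel density):

```python
import numpy as np

def vergence_deg(x_left, x_right, px_per_cm=38.0, dist_cm=60.0, ipd_cm=6.3):
    """Vergence angle (deg) from the horizontal separation of the two eyes'
    on-screen gaze points (pixels). With both eyes fixating the screen plane
    the separation is ~0 and vergence equals 2*atan(ipd/2 / dist); a crossed
    (negative) separation means convergence in front of the screen, as
    predicted for a convex percept."""
    sep_cm = (x_right - x_left) / px_per_cm
    return np.degrees(2 * np.arctan2((ipd_cm - sep_cm) / 2, dist_cm))
```

Epoching this vergence trace and the pupil trace around the key presses would then test whether the reflexive oculomotor changes accompany or precede the reported depth reversals.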