Analysis of Large-Scale Assessment Data
Thursday 1 and Friday 2 October, and Monday 5 to Friday 9 October 2020
Dr. Isa Steinmann from TU Dortmund University, Germany
This workshop aims to communicate both knowledge and hands-on analytical skills in the field of international large-scale assessment (ILSA) data. ILSAs like PIRLS (Progress in International Reading Literacy Study), TIMSS (Trends in International Mathematics and Science Study), or PISA (Programme for International Student Assessment) provide unique opportunities to investigate both substantive and methodological research questions. ILSAs are conducted in recurring cycles and in numerous countries. They collect data at different levels, often covering education systems (e.g., curriculum information, quantity of schooling), schools (e.g., school resources, composition), teachers (e.g., qualification for teaching, teaching practices), students (e.g., achievement in standardized tests, learning motivations), and family backgrounds (e.g., socio-economic status, home support for learning). The international design of the tests and questionnaires aims to make educational inputs, frameworks, and outputs comparable across countries and time. At the same time, these studies face numerous methodological challenges and offer a plethora of interesting research opportunities.
This workshop addresses the following ILSA-related topics:
- Overview and central aims of ILSAs
- Sampling and data collection
  - Target populations
  - Sampling procedures and representativeness of the data
  - Instrument development
  - Conducting the surveys
  - Quality assurance
- ILSA data
  - Achievement tests
  - Questionnaire data
  - Comparison of educational systems
  - National extensions
- Research using ILSA data
  - Consequences of the data structure and sampling design
  - Substantive research questions
  - Methodological research questions
The workshop combines lectures, interactive formats, and group work with practical exercises in the R environment. After successfully completing the course, students
- have broad knowledge of ILSAs’ backgrounds, methods, and scope,
- can critically read and interpret the results of ILSA reports and ILSA-based studies,
- can handle the methodological peculiarities of ILSA data in their own analyses (illustrated in the sketch below), and
- can develop their own ILSA-based research questions and analytical approaches.
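One such peculiarity is that studies like PIRLS, TIMSS, and PISA report achievement as several plausible values and supply sampling weights. The minimal sketch below, in base R with simulated data, runs a weighted regression once per plausible value and pools the results with Rubin's rules; all variable names (pv1–pv5, wgt, ses) are invented placeholders, not actual ILSA variable names.

```r
# A minimal sketch, assuming five plausible values pv1-pv5 and a total
# student weight wgt; all names and numbers are invented for illustration.
set.seed(42)
n  <- 1000
df <- data.frame(
  ses = rnorm(n),            # socio-economic status indicator (simulated)
  wgt = runif(n, 0.5, 2)     # sampling weight (simulated)
)
for (i in 1:5) {             # five plausible achievement values (simulated)
  df[[paste0("pv", i)]] <- 500 + 35 * df$ses + rnorm(n, 0, 80)
}

# Run the same weighted regression once per plausible value
fits <- lapply(1:5, function(i) {
  lm(reformulate("ses", paste0("pv", i)), data = df, weights = wgt)
})
est <- sapply(fits, function(f) coef(f)["ses"])
se  <- sapply(fits, function(f) summary(f)$coefficients["ses", "Std. Error"])

# Pool with Rubin's rules: average the estimates; combine the within-
# and between-imputation variances (the between part inflated by 1 + 1/m)
m         <- 5
pooled    <- mean(est)
pooled_se <- sqrt(mean(se^2) + (1 + 1/m) * var(est))
c(estimate = pooled, se = pooled_se)
# Note: design-correct standard errors would additionally require the
# studies' replicate weights (e.g., jackknife zones); omitted for brevity.
```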
The workshop covers seven days (09:00-16:00). After five days, students are required to hand in an outline of a written assignment. The outlines will be discussed, and individual feedback will be provided. These outlines must be approved before students submit the final versions of their assignment papers. The deadline for submitting the papers is 30 November 2020.
Participation in the course requires meeting the formal criteria described on the course page as well as good knowledge of basic statistical methods. Participants are expected to bring laptops with administrator rights and the newest version of R installed.
The reading list is available on the semester page.
Response Processes Data in Assessment
Dr Bryan Maddox, Associate Professor at the University of East Anglia, England
Monday 23 to Friday 27 November 2020
This five-day workshop will introduce the use of response process data in the design and validation of computer-based tests, including gamified assessments. It covers the theory and methods for collecting and analyzing multi-modal process data and includes practical workshops. The workshop locates the use of process data in test validation, including its significance in the Standards for Educational and Psychological Testing and the associated theoretical literature on validity, including Evidence-Centered Design. It will include content on the evaluation of user experience and user interfaces in computer-based test design. We will consider how expanded sources of process data can be included in validation processes and how they can be incorporated as 'extensions of the test' in the form of performance data, including the use of multi-modal process data in data mining, artificial intelligence, and machine learning. By the end of the workshop, participants will have gained practical knowledge and skills in the capture and analysis of process data and will be able to locate the use of process data within validity theory and assessment design. The final day of the workshop will involve presentations by the participants on the use of process data in assessment design and validation.
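As a small illustration of the kind of analysis involved, the sketch below derives two simple process indicators, time on item and action count, from an event log. The log layout and all names are invented for this example; real testing platforms each have their own log schemas.

```r
# A minimal sketch of process-data indicators from a hypothetical event log.
# The layout (person, item, event, timestamp) is invented for illustration.
events <- data.frame(
  person    = c(1, 1, 1, 1, 2, 2, 2),
  item      = c("A", "A", "A", "B", "A", "A", "B"),
  event     = c("enter", "click", "answer", "answer",
                "enter", "answer", "answer"),
  timestamp = c(0.0, 3.2, 8.5, 20.1, 0.0, 5.4, 12.9)  # seconds since start
)

# Time on item: span between first and last logged event per person-item
time_on_item <- aggregate(timestamp ~ person + item, data = events,
                          FUN = function(t) max(t) - min(t))
names(time_on_item)[3] <- "seconds"

# Action count: number of logged events per person-item
n_actions <- aggregate(event ~ person + item, data = events, FUN = length)
names(n_actions)[3] <- "n_events"

merge(time_on_item, n_actions)   # one row of indicators per person-item
```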
The reading list is available on the semester page.
Methods for Causal Inference in Educational Research
Professor Jan-Eric Gustafsson, University of Gothenburg, Sweden, and Professor II, University of Oslo, Norway
Eight full workshop days from late January through February 2021. The workshop will consist of lectures and hands-on applications, generally in R.
The main purpose of the workshop is to introduce techniques for making credible causal inferences from observational data and to show how such techniques can be used in educational research. While the randomized experiment is recognized as the superior design for determining the causal effect of a treatment on an outcome, ethical, practical, and economic reasons often prevent such designs from being used to answer causal questions. However, different disciplinary fields have developed alternative techniques which, under certain assumptions, allow causal inferences to be made from observational data. Examples of such methods are instrumental variables (IV) regression, regression discontinuity (RD) designs, difference-in-differences (DD), and propensity score matching. The workshop aims to develop participants’ skills in choosing and applying appropriate techniques for answering causal questions on the basis of observational data, as well as in critically reviewing educational research that aims to make causal inferences.
In the first part of the workshop, the distinction between causal and non-causal research questions is introduced, and the reasons why it is generally impossible to answer causal questions by analyzing associations among observed variables are made explicit. The so-called potential outcomes framework is introduced as a set of tools for understanding causal inference in terms of counterfactual comparisons. Issues in causal inference are demonstrated by simulating and analyzing data with traditional regression techniques. In the second part of the course, three frequently used approaches for answering causal questions are treated in some detail: RD designs, IV regression analysis, and DD. The logic upon which these approaches are based is presented, and the use and interpretation of the techniques are illustrated with both simulated and real data in R. In the third part of the course, structural equation modeling and propensity score matching are presented as methods that condition on observed and latent variables to prevent bias in estimates of causal effects.
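As a minimal sketch of this logic, the simulation below generates data with an unobserved confounder, shows that a naive regression overstates the treatment effect, and then recovers the true effect with a hand-coded two-stage least squares (IV) estimator. All effect sizes and variable names are illustrative assumptions, not course materials.

```r
# A minimal sketch: confounding bias and an instrumental-variable fix.
set.seed(1)
n <- 10000
u <- rnorm(n)                      # unobserved confounder (e.g., motivation)
z <- rbinom(n, 1, 0.5)             # instrument: shifts x, affects y only via x
x <- 0.8 * z + 0.9 * u + rnorm(n)  # treatment/exposure
y <- 0.5 * x + 1.0 * u + rnorm(n)  # outcome; the true causal effect is 0.5

# The naive regression is biased upward because u drives both x and y
coef(lm(y ~ x))["x"]               # clearly above 0.5

# Two-stage least squares by hand: stage 1 predicts x from z,
# stage 2 regresses y on the fitted values
x_hat <- fitted(lm(x ~ z))
coef(lm(y ~ x_hat))["x_hat"]       # close to the true effect 0.5
# Note: lm()'s stage-2 standard errors are not valid for 2SLS;
# dedicated tools such as AER::ivreg compute correct ones.
```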
A detailed schedule will be available on the Spring 2021 semester page in late November.