Arnhem, Netherlands

Van Der Kleij F.M.,Cito | Eggen T.J.H.M.,Cito | Eggen T.J.H.M.,University of Twente | Timmers C.F.,Saxion University | Veldkamp B.P.,University of Twente
Computers and Education

The effects of written feedback in a computer-based assessment (CBA) for learning on students' learning outcomes were investigated in an experiment at a higher education institute in the Netherlands. Students were randomly assigned to three groups and took an assessment for learning with a different kind of feedback per group: immediate knowledge of correct response (KCR) + elaborated feedback (EF), delayed KCR + EF, or delayed knowledge of results (KR). A summative assessment was used as a post-test. No significant effect of feedback condition on student achievement on the post-test was found. Results suggest that students paid more attention to immediate than to delayed feedback. Furthermore, the time spent reading feedback was positively influenced by students' attitude and motivation. Students perceived immediate KCR + EF feedback as more useful for learning than KR, and they had a more positive attitude towards feedback in a CBA when they received KCR + EF rather than KR only. © 2011 Elsevier Ltd. All rights reserved.

Bechger T.M.,Cito | Maris G.,Cito | Maris G.,University of Amsterdam
Psychometrika

This paper presents an IRT-based statistical test for differential item functioning (DIF). The test is developed for items conforming to the Rasch (Probabilistic Models for Some Intelligence and Attainment Tests, The Danish Institute of Educational Research, Copenhagen, 1960) model, but we outline its extension to more complex IRT models. It differs from existing procedures in that DIF is defined in terms of the relative difficulties of pairs of items rather than the difficulties of individual items. The argument is that the difficulty of an individual item is not identified from the observations, whereas relative difficulties are. This leads to a test that is closely related to Lord's (Applications of Item Response Theory to Practical Testing Problems, Erlbaum, Hillsdale, 1980) test for item DIF, albeit with a different and more correct interpretation. Illustrations with real and simulated data are provided. © 2014, The Psychometric Society.
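As a rough sketch of the pairwise idea (the notation is ours and the exact statistic in the paper may differ): under the Rasch model the probability of a correct response is

    P(X_{pi} = 1 \mid \theta_p) = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)},

where the individual difficulty b_i is identified only up to the choice of scale origin, while a difference b_i - b_j is identified. A pairwise DIF hypothesis therefore compares relative difficulties across two groups,

    H_0:\; b_i^{(1)} - b_j^{(1)} = b_i^{(2)} - b_j^{(2)},

which can be tested with a Wald-type statistic of the form

    z_{ij} = \frac{(\hat b_i^{(1)} - \hat b_j^{(1)}) - (\hat b_i^{(2)} - \hat b_j^{(2)})}{\sqrt{\widehat{\mathrm{Var}}\!\left[(\hat b_i^{(1)} - \hat b_j^{(1)}) - (\hat b_i^{(2)} - \hat b_j^{(2)})\right]}}.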

Van Der Schee J.,VU University Amsterdam | Notte H.,Cito | Zwartjes L.,Vereniging Leraars Aardrijkskunde
International Research in Geographical and Environmental Education

An important question for geography teachers all over the world is how to define, stimulate and test geographic literacy. Although modern technology is no guarantee of quality, it offers new possibilities for teaching and testing, as can be seen in contemporary geography learning/teaching units using digital maps and interactive tests. Tests such as the International Geography Olympiad and the Dutch GEA test for geographic literacy are starting points for an international discussion about the quality of geography teaching and for the development of a new international geography test. © 2010 Taylor & Francis.

De Klerk S.,EX plain | De Klerk S.,University of Twente | Veldkamp B.P.,University of Twente | Eggen T.J.H.M.,Cito
Computers and Education

Researchers have shown in multiple studies that simulations and games can be effective and powerful tools for learning and instruction (cf. Mitchell & Savill-Smith, 2004; Kirriemuir & McFarlane, 2004). Most of these studies deploy a traditional pretest-posttest design in which students do a paper-based test (pretest), then play the simulation or game, and subsequently do a second paper-based test (posttest). Pretest-posttest designs treat the game as a black box in which something occurs that influences subsequent performance on the posttest (Buckley, Gobert, Horwitz, & O'Dwyer, 2010). Less research has been done in which game-play product data or process data themselves are used as indicators of student proficiency in some area. Over the last decade, however, researchers have increasingly focused on what happens inside the black box, and the literature on the topic is growing. To our knowledge, no systematic reviews have been published that investigate the psychometric analysis of performance data from simulation-based assessment (SBA) and game-based assessment (GBA). Therefore, in Part I of this article, a systematic review of the psychometric analysis of the performance data of SBA is presented. The main question addressed in this review is: 'What psychometric strategies or models for treating and analyzing performance data from simulations and games are documented in the scientific literature?'. Then, in Part II of this article, the findings of our review are further illustrated with an empirical example of the psychometric model that, according to our review, is most frequently applied to the performance data of SBA: the Bayesian network. Both the results from Part I and Part II assist future research into the use of simulations and games as assessment instruments. © 2015 Elsevier Ltd.
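As a minimal sketch of how a Bayesian network turns game-play evidence into a proficiency estimate (the node structure, task names, and probability values below are illustrative assumptions, not taken from the reviewed studies or the article's empirical example), a single latent proficiency node can be updated from observed task outcomes by exact enumeration:

    # Minimal Bayesian-network scoring sketch: one latent "proficiency" node with two
    # observed task outcomes. Probabilities are illustrative assumptions only.

    PRIOR = {"low": 0.5, "high": 0.5}            # P(proficiency)

    # P(task solved correctly | proficiency), one conditional probability table per task.
    P_CORRECT = {
        "task1": {"low": 0.3, "high": 0.8},
        "task2": {"low": 0.2, "high": 0.7},
    }

    def posterior(observations):
        """Return P(proficiency | observed task outcomes) via Bayes' rule.

        observations: dict such as {"task1": True, "task2": False}.
        """
        joint = {}
        for level, prior in PRIOR.items():
            likelihood = 1.0
            for task, correct in observations.items():
                p = P_CORRECT[task][level]
                likelihood *= p if correct else (1.0 - p)
            joint[level] = prior * likelihood
        total = sum(joint.values())
        return {level: value / total for level, value in joint.items()}

    if __name__ == "__main__":
        # A student who solved task1 but failed task2.
        print(posterior({"task1": True, "task2": False}))

Operational SBA/GBA scoring models typically contain many more latent and observable nodes and have their conditional probability tables estimated from data, but the same posterior-updating logic applies.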
