This project has three main goals. First, researchers will develop an assessment of probabilistic reasoning—a concept found in many states' mathematics standards that is difficult to assess with traditional methods. Second, they will develop a concept inventory that advances educational assessment methodology, a field that has traditionally relied on measurement models ill-suited to diagnosing misconceptions grounded in learning theories. Third, the team will develop a truly formative assessment system that supports teachers by allowing them to diagnose student misconceptions and plan targeted instruction.
Setting: The DICE system will be developed and validated in U.S. middle grades classrooms across a variety of geographic regions and urbanicities.
Sample: Across five different studies and data collection activities, the DICE project will engage over 5000 6th to 8th grade students (the majority of whom will participate in large-scale administrations of the DICE to pilot and calibrate its items) and over 60 educators teaching grades 6–8.
Assessment: The DICE will assess five student misconceptions related to understandings of probability and chance. These are: (1) the conjunction fallacy (the misconception that the joint occurrence of two events is more likely than either event alone), (2) the outcome approach (the misconception that a probability is a prediction of whether a single outcome will occur), (3) availability bias (the misconception that events are more likely when examples of them come to mind easily), (4) representativeness bias (the misconception that every sample of a population must be representative of that population), and (5) equiprobability bias (the misconception that all outcomes are equally likely). To assess these misconceptions, the DICE will use selected-response test questions whose response options specifically target or reflect particular misconceptions. Teacher feedback reports will produce student profiles reflecting the probability that individual students reason using each of the five misconceptions.
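The idea of distractors that map to specific misconceptions can be illustrated with a small sketch. This is a hypothetical tally, not the DICE scoring model: the item identifiers, option tags, and the simple proportion-based profile are invented for illustration (the actual system derives misconception probabilities from psychometric modeling).

```python
from collections import Counter

# Hypothetical example: each answer option on a selected-response item is
# tagged as either correct or as reflecting one of the five misconceptions.
MISCONCEPTIONS = [
    "conjunction_fallacy",
    "outcome_approach",
    "availability_bias",
    "representativeness_bias",
    "equiprobability_bias",
]

# Invented item key: option letter -> tag ("correct" or a misconception).
item_key = {
    "item1": {"A": "correct", "B": "equiprobability_bias", "C": "outcome_approach"},
    "item2": {"A": "representativeness_bias", "B": "correct", "C": "conjunction_fallacy"},
    "item3": {"A": "availability_bias", "B": "outcome_approach", "C": "correct"},
}

def misconception_profile(responses: dict) -> dict:
    """Share of answered items whose chosen option reflects each misconception."""
    tags = Counter(item_key[item][choice] for item, choice in responses.items())
    n = len(responses)
    return {m: tags[m] / n for m in MISCONCEPTIONS}

# A student who chose C on item1, B on item2, and A on item3:
profile = misconception_profile({"item1": "C", "item2": "B", "item3": "A"})
```

Because every distractor is tagged, even incorrect answers carry diagnostic information, which is what makes the feedback reports formative rather than purely evaluative.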
Research Design and Methods: Developing and validating the DICE will involve several studies and data collection activities. During the first year, the research team will develop approximately 75 assessment items, which will vary in their exact format and in which specific misconceptions they target. Once the items are developed, the team will collect expert reviews from advisors and expert teachers, both to improve the items and to provide test content validity evidence. The team will also conduct in-depth cognitive labs (48 students) and innovative laboratory studies including eye-tracking and affective state detection (50 students), both to improve the items and to collect response process validity evidence. Once the items have been reviewed and revised, the team will conduct large-scale administrations of the DICE items in years 2 and 3 (2,500 students each) to assess the items' psychometric properties and provide internal structure validity evidence for the overall assessment. In years 2 and 3, the team will also conduct focus groups with approximately 50 practitioners, who will assist in creating, critiquing, and revising the DICE feedback reports and interpretive guides. Finally, in years 3 and 4, the researchers will conduct two experimental studies (involving 12 teachers and 60 students) to collect external validity evidence supporting claims that misconception diagnoses from the DICE are meaningful, coincide with expert diagnoses, and are sensitive to development and changes in probabilistic reasoning.
Control Condition: Because this is an assessment development and validation project, there is no control group.
Key Measures: The DICE itself is the primary measure in this study. Other measures include researcher-developed methods for analyzing eye-tracking data and detecting students' affective states during the laboratory studies, and for diagnosing misconceptions during the experimental studies in the final years of the project.
Data Analytic Strategy: Across their various studies and activities, the researchers will analyze results using a variety of methods, including qualitative and quantitative analyses, psychometric modeling with the Scaling Individuals and Classifying Misconceptions (SICM) model, and machine learning.
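A minimal sketch can convey the flavor of classifying misconceptions from response patterns. This is not the SICM model itself: the sketch below is a simplified naive Bayes update for a single misconception, with the prior and the slip/guess rates invented for illustration.

```python
# Simplified illustration (not the actual SICM model): treat each student as
# either holding or not holding one misconception, and update the posterior
# probability of that latent class from their item responses.
PRIOR = 0.5          # assumed prior probability a student holds the misconception
P_TAG_IF_HELD = 0.8  # assumed chance of picking the tagged distractor if held
P_TAG_IF_NOT = 0.1   # assumed chance of picking it anyway if not held

def posterior_holds_misconception(picked_tagged: list) -> float:
    """Posterior P(misconception | responses) under a naive Bayes model.

    picked_tagged[i] is True if the student chose the misconception-tagged
    distractor on item i, assuming responses are independent given the class.
    """
    like_held, like_not = PRIOR, 1 - PRIOR
    for picked in picked_tagged:
        like_held *= P_TAG_IF_HELD if picked else 1 - P_TAG_IF_HELD
        like_not *= P_TAG_IF_NOT if picked else 1 - P_TAG_IF_NOT
    return like_held / (like_held + like_not)

# A student who picks the tagged distractor on 3 of 4 items:
p = posterior_holds_misconception([True, True, False, True])
```

The SICM model is considerably richer (it scales ability and classifies misconceptions jointly across all items and options), but the same underlying logic applies: repeated selection of misconception-aligned options shifts the probability in a student's diagnostic profile.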
Associate Professor, Educational Psychology, North Carolina State University
Roger Azevedo, North Carolina State University
Jessica Masters, Research Matters
Lisa Famularo, Research Matters