
Diagnostic Inventories for Cognition in Education (DICE)

A formative assessment tool for middle school probability

Probability can be a tough concept for students to grasp. Yet it is foundational both for the statistics topics typically taught in the middle school curriculum and for making informed decisions in the real world.

The researchers of the Diagnostic Inventories for Cognition in Education (DICE) project developed two assessments that target reasoning covered in the statistics and probability domains of most states’ middle grades math standards as well as in the Common Core State Standards. The assessments were designed to be truly formative, meaning that they can be used by teachers and students during instruction to inform ongoing teaching and learning.

We hope that our assessments will help teachers pinpoint the individual concepts that each student is struggling to understand. Our assessments use a new method to provide feedback about students’ overall performance and identify students who may be reasoning with misconceptions. This method allows us to provide valid and reliable feedback about student reasoning for the purpose of informing instructional decisions and closing gaps in student learning.

About the Assessments

The DICE assessments are designed to diagnose middle-grades (grades 6-8) students’ misconceptions in probabilistic reasoning. Students do not need to have already studied statistics or probability to take the DICE assessments.

The two DICE assessments, Exploring Probability and Chance and Data, each take about 15 minutes to complete.

Exploring Probability

The Exploring Probability assessment is designed to assist teachers in helping students develop foundational understandings about probability studied in middle school. Specifically, in middle school, students should:

  1. Use information from a sample space (all possible outcomes) to make judgments about the probability, or likelihood, of an event (a worked sketch follows this list). This is aligned with two standards from the Common Core:
  • CCSS.MATH.CONTENT.7.SP.C.5: Understand that the probability of a chance event is a number between 0 and 1 that expresses the likelihood of the event occurring. Larger numbers indicate greater likelihood. A probability near 0 indicates an unlikely event, a probability around 1/2 indicates an event that is neither unlikely nor likely, and a probability near 1 indicates a likely event.
  • CCSS.MATH.CONTENT.7.SP.C.7.A: Develop a uniform probability model by assigning equal probability to all outcomes, and use the model to determine probabilities of events.
  2. Understand that if a theoretical probability is known or assumed, then in a sequence of random, independent events (e.g., a series of individual tosses of a fair coin), past results do not influence the outcome of future events. This is aligned with two standards from the Common Core:
  • CCSS.MATH.CONTENT.7.SP.C.5
  • CCSS.MATH.CONTENT.7.SP.C.6: Approximate the probability of a chance event by collecting data on the chance process that produces it and observing its long-run relative frequency, and predict the approximate relative frequency given the probability.
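As a concrete illustration of the first concept, here is a minimal sketch in Python. The bag of 3 blue and 5 red marbles is a hypothetical example of ours (echoing the marble scenario in the misconceptions below), not an item from the assessments:

    from fractions import Fraction

    # Hypothetical sample space: a bag with 3 blue and 5 red marbles.
    sample_space = ["blue"] * 3 + ["red"] * 5

    # Probability of an event = favorable outcomes / total possible outcomes.
    p_blue = Fraction(sample_space.count("blue"), len(sample_space))
    print(p_blue)  # 3/8, below 1/2, so drawing blue is the less likely outcome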

The Exploring Probability assessment consists of 15 questions that can be used to diagnose whether students hold common misconceptions that can hinder their learning of the above concepts and standards. Each question on the assessment has one or more incorrect responses that are tagged with one of the following common misconceptions corresponding to the concepts above:

  1. Everything Equally Likely. No matter what the sample space is, all outcomes are equally likely to occur. This can manifest in several ways. A common error occurs when there are two possible types of outcomes (e.g., blue or red marbles) that do not appear in equal numbers in the sample space (e.g., 3 blue and 5 red marbles), and students claim that since there are only two possible colors, the colors must be equally likely to occur.
  2. Ignoring Independence. Past results can influence future results. This can manifest as reasoning about continuing a perceived pattern in results, and it can also relate to the recency of results. After a streak of events that do not represent the assumed population (negative recency), an event that will “even out” the probability distribution is considered more likely. If an outcome appears many times in a row (positive recency), it is “hot” or “on a roll,” and students may believe it will occur more often now (a simulation sketch follows this list).
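The Ignoring Independence misconception can be confronted with a quick simulation. This minimal sketch (our own illustration, not part of the assessments) tosses a fair coin repeatedly and shows that the toss following a streak of three heads is still heads about half the time:

    import random

    random.seed(1)  # for a reproducible run

    # Record the toss that follows a streak of three heads. If tosses are
    # independent, heads should still appear about half the time afterward.
    after_streak = []
    for _ in range(200_000):
        tosses = [random.choice("HT") for _ in range(4)]
        if tosses[:3] == ["H", "H", "H"]:
            after_streak.append(tosses[3])

    print(after_streak.count("H") / len(after_streak))  # ~0.5: tails is not "due"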

Chance and Data

The Chance and Data assessment is designed to assist teachers in helping students develop foundational understandings about important concepts related to comparing the probabilities of events from two sample spaces (theoretical probability) or of two events that could occur with different sample sizes (empirical probability). Specifically, in middle school, students should:

  1. Use information from one or two sample spaces to make appropriate judgments and comparisons about the probability, or likelihood, of an outcome by considering the number of possibilities for a specific outcome as well as the total number of all possible outcomes. This is aligned with one standard from the Common Core:
    • CCSS.MATH.CONTENT.7.SP.C.5: Understand that the probability of a chance event is a number between 0 and 1 that expresses the likelihood of the event occurring. Larger numbers indicate greater likelihood. A probability near 0 indicates an unlikely event, a probability around 1/2 indicates an event that is neither unlikely nor likely, and a probability near 1 indicates a likely event.
  2. Recognize that empirical results (empirical probabilities) from repeated sampling will vary from what is expected when a theoretical probability is known, and expect more variability with smaller sample sizes and less variability with larger sample sizes (see the simulation sketch after this list). This is aligned with one standard from the Common Core:
    • CCSS.MATH.CONTENT.6.RP.A.3.D: Use ratio reasoning to convert measurement units; manipulate and transform units appropriately when multiplying or dividing quantities.
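A simulation makes the sample-size effect visible. In this minimal sketch (our own illustration, not part of the assessments), repeated samples of fair coin tosses show a much wider spread in the proportion of heads for small samples than for large ones:

    import random

    random.seed(1)  # for a reproducible run

    def spread_of_proportions(n, trials=10_000):
        """Min and max proportion of heads across repeated samples of n tosses."""
        props = [sum(random.random() < 0.5 for _ in range(n)) / n
                 for _ in range(trials)]
        return min(props), max(props)

    print(spread_of_proportions(5))    # wide spread, often from 0.0 up to 1.0
    print(spread_of_proportions(500))  # narrow spread, clustered near 0.5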

This assessment consists of 14 questions that can be used to diagnose whether students hold common misconceptions that can hinder their learning of the above concepts and standards. Each question on the assessment has one or more incorrect responses that are tagged with one of the following common misconceptions corresponding to the concepts above:

  1. Ignoring Relative Frequencies. Comparing sample spaces only by comparing frequencies of possible outcomes. Students who ignore relative frequencies make probability judgments by comparing frequencies alone, reasoning that having more (or fewer) of a certain outcome in the sample space leads to a higher (or lower) chance of that event occurring, regardless of the outcome’s relative probability. A student may also attend to the total number of outcomes and believe that a smaller total would increase the chance of a particular outcome.
  2. Illusion of Equality. No matter what the sample sizes are, all outcomes are equally likely to occur in both samples. Given empirical data about two possible results (typically with very different sample sizes), students judge that the resulting events are equally likely to occur (e.g., thinking that getting 3 heads in 5 fair coin tosses is just as likely as getting 300 heads in 500 tosses; see the worked sketch after this list). Students might consider two empirical results to have the same likelihood of occurring based on several different lines of reasoning, such as proportional or 50-50 reasoning.
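The Illusion of Equality example above can be checked with a short binomial calculation. In this minimal sketch (our own illustration), exactly 3 heads in 5 fair tosses turns out to be vastly more likely than exactly 300 heads in 500 tosses, even though both are 60% heads:

    from math import comb

    def binom_pmf(k, n, p=0.5):
        """Probability of exactly k heads in n tosses of a fair coin."""
        return comb(n, k) * p**k * (1 - p)**(n - k)

    print(binom_pmf(3, 5))      # 0.3125
    print(binom_pmf(300, 500))  # ~1e-06: far less likely, despite the same ratio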

Who Can Use the DICE Assessments

The DICE Assessments are designed for teachers of grades 6-8 mathematics, but the assessments are freely available to all teachers.

How to Use the DICE Assessments

The online assessments and reporting system are designed to provide teachers with results about specific misconceptions that their students exhibit on the assessments. The primary tool in the DICE assessments is the online diagnostic class report. For more information about our online diagnostic report and how to use that information, please view our online document (PDF).

To access the online diagnostic report for your students, you must follow these steps:

Step 1: Administer a paper-and-pencil version of the assessment

Here are links to download PDFs of printable versions of the assessments:

Step 2: Input your students’ responses into the provided template files

Input your students’ responses into the CSV template for each assessment you administered. Here are links to download CSV files of the templates for each assessment:

How to use the CSV templates (a scripted sketch follows this list):

  • In the CSV files, the first row for “purple1” is an example to show what your rows will look like. Delete the purple1 row after you understand how to fill in the spreadsheet.
  • Add your students’ responses by entering the letters that correspond with the response options they selected. Record skipped responses as “S”.
  • Do not change the column names. This is important: if you change the column names, the online diagnostic report will not work.
  • For “Username,” you can use whatever information you want to identify your students.
  • Save your CSVs as “ExploringProbability_[yourname]_[yourschool].csv” and “ChanceAndData_[yourteacheridnumber].csv”.
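As a minimal sketch of Step 2, the following Python snippet writes a response file in the expected shape. The column names (“Username”, “Q1”, …) and the student responses shown here are hypothetical placeholders; keep the exact column names from the downloaded template:

    import csv

    # Hypothetical rows; replace with your students' actual responses.
    rows = [
        {"Username": "student01", "Q1": "A", "Q2": "C", "Q3": "S"},  # "S" = skipped
        {"Username": "student02", "Q1": "B", "Q2": "B", "Q3": "D"},
    ]

    # Replace the bracketed placeholders in the filename with your own details.
    with open("ExploringProbability_[yourname]_[yourschool].csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["Username", "Q1", "Q2", "Q3"])
        writer.writeheader()  # column names must match the template exactly
        writer.writerows(rows)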

Step 3: Upload your students’ responses to obtain the online diagnostic report

Visit the DICE website to access the assessment specifications and upload your CSV files from Step 2. After uploading a CSV, you will immediately see the online diagnostic report for your class.

Who Developed the DICE Assessments and Reporting System?

The DICE project was conducted by researchers from the University of Georgia, Research Matters, North Carolina State University, and the University of Central Florida. It was funded by the Institute of Education Sciences (IES).
Laine Bradshaw, PhD

Principal investigator
Associate professor of quantitative methodology
Department of Educational Psychology
University of Georgia

Research interests:

  • Diagnostic classification modeling
  • Diagnostic assessment design
  • Student misconceptions in probabilistic reasoning
  • Innovative assessment technology

Lisa Famularo, PhD

Co-principal investigator
Co-founder and partner
Research Matters, LLC

Research interests:

  • Innovative assessment
  • Education policy
  • Design of tools and programs to support teaching and learning of students from underserved populations
  • Bridging the gaps between research, policy, and practice

Hollylynne Lee, PhD

Co-principal investigator
Professor of mathematics and statistics education
Department of Science, Technology, Engineering, and Mathematics Education
NC State University

Research interests:

  • Teaching and learning statistics and probability
  • Teacher education
  • Online teaching and learning
  • Design of technological tools for supporting learning in mathematics and statistics

Jessica Masters, PhD

Co-principal investigator
Co-founder and partner
Research Matters, LLC

Research interests:

  • Conducting quantitative and qualitative research in K-12 education
  • Technology-enhanced assessment and diagnostic assessment
  • Middle-grades mathematics
  • Teaching computer science

Roger Azevedo, PhD

Co-principal investigator

Madeline Schellman

Graduate research assistant

Hamid Sanei

Graduate research assistant

Daryn Dever

Graduate research assistant

Journal Publications and Conference Presentations

Lee, H. S., Sanei, H., Famularo, L., Masters, J., Bradshaw, L., & Schellman, M. (2023). Validating a concept inventory for measuring students’ probabilistic reasoning: The case of reasoning within the context of a raffle. Journal of Mathematical Behavior, 71. Available online.

Sanei, H., & Lee, H. S. (2021). Attending to students’ reasoning about probability concepts for building statistical literacy [Paper]. Proceedings of the International Association for Statistical Education Satellite Conference. Available online.

Lee, H. S., Famularo, L., Masters, J., Bradshaw, L., & Sanei, H. (2019). Students’ reasoning about probability in the context of a raffle. In J. M. Contreras, M. M. Gea, M. M. López-Martín, & E. Molina-Portillo (Eds.), Actas del Tercer Congreso Internacional Virtual de Educación Estadística. Available online.

Lee, H. S. (2019, April). Development of diagnostic assessments in probability reasoning for middle grade students [Paper presentation]. Annual Research Conference of the National Council of Teachers of Mathematics, San Diego, CA.

Bradshaw, L., Famularo, L., Lee, H. S., & Masters, J. (2018, April). Designing diagnostic inventories of cognition in education. In Learning from our students’ mistakes: Using information from incorrect, incomplete, and inefficient student responses [Symposium presentation]. American Educational Research Association conference, New York City, NY.

The research reported here was supported by the Institute of Education Sciences, U.S. Department of Education, through Grant R305A170441 to the University of Georgia. The opinions expressed are those of the authors and do not represent views of the Institute or the U.S. Department of Education.
