Automatic Measurement of Students' Reasoning Patterns in Scientific Argumentation (Auto-SPIN-SA)

  • Title: Student Reasoning Patterns in Science (SPIN-Science)
  • Award period: 4 years (07/01/2024 – 06/30/2028)

For this project, researchers will collaboratively develop, refine, and validate an artificial intelligence (AI) supported classroom assessment tool to measure middle-school students’ reasoning patterns when they engage in the practice of scientific argumentation about ecosystem phenomena. The AI-supported assessments will target the key components of scientific argumentation and capture the varied reasoning patterns students exhibit as they argue from evidence. The measure will embed AI models in classroom assessments to afford real-time diagnosis of students’ reasoning patterns and immediate feedback.

This project aims to address three needs in the field:

  1. A validated means of assessing students’ scientific argumentation ability following the adoption of the Next Generation Science Standards,
  2. Tools to measure English learners’ (EL) language use in reasoning and scientific argumentation, and
  3. An instrument that measures and tracks in real time how well students make use of classroom instruction to develop argumentation ability over extended periods of time.

Project Activities

Researchers will iteratively develop AI-supported classroom assessments and refine them through usability testing and two rounds of classroom studies (pilot and field studies). These assessments will delineate the various reasoning patterns in students’ arguments. Researchers will also expand the AI models to capture intended meaning across a broad range of linguistic features, including those characteristic of ELs, as students engage in scientific argumentation activities. The team will conduct a series of validation studies to investigate the cognitive, inferential, and instructional validity of the AI-supported assessments of student argumentation.

Structured Abstract

Research will take place in middle schools in California, South Carolina, and New Jersey, and researchers will select schools with significant EL student populations to participate in the pilot and field studies. Approximately 60 grade 6-8 science classrooms will participate in the pilot and experimental classroom studies. Participating schools will enroll at least 25% minority students and at least 25% low-income students. The assessment developed in this project will measure students’ scientific argumentation competence and apply natural language processing (NLP) analysis to measure key argumentation components. An evidence-centered design approach will be applied to ensure the validity of the assessments.
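As a purely illustrative sketch of the kind of NLP analysis described above (not the project’s actual model), the snippet below uses a generic zero-shot text classifier from the Hugging Face transformers library to score a student response against hypothetical argumentation component labels such as claim, evidence, and reasoning.

    # Illustrative only: a generic zero-shot classifier standing in for the
    # project's AI models; the label set and example sentence are hypothetical.
    from transformers import pipeline

    # Candidate argumentation components (hypothetical label set).
    COMPONENTS = ["claim", "evidence", "reasoning"]

    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    student_sentence = ("The fish population dropped because the algae bloom "
                        "used up the oxygen in the pond.")

    # Score the sentence against each argumentation component label.
    result = classifier(student_sentence, candidate_labels=COMPONENTS)

    for label, score in zip(result["labels"], result["scores"]):
        print(f"{label}: {score:.2f}")

In practice, a classifier like this would only be one piece of a larger pipeline; the project’s own models and validation procedures are described in the sections that follow.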

Research Design and Methods

Researchers will develop the assessment tool through two parallel strands of iterative development, feedback, and refinement. The first strand focuses on the development and refinement of the AI-based tool, and the second strand aims to develop assessment tasks that allow students to demonstrate their argumentation skills in science practices.

Researchers will first collect cognitive validity evidence in a cognitive lab study with 50 middle school students (including 20 ELs), gathering user-experience data and conducting student and teacher interviews to evaluate the design of the intervention, examine how well the AI-based tool classifies reasoning patterns, and understand how students respond to AI feedback.

To investigate inferential validity, researchers will conduct two rounds of classroom studies (pilot and field studies) to explore how teachers and students use the tool and determine how reasoning patterns change when students are engaged with tasks that include AI feedback tools and teacher dashboards. The pilot study will include two or three middle school teachers and approximately 200 students (including at least 50 ELs). The field study will include approximately 1,000 students and 18 science teachers.

To examine instructional validity, researchers will conduct classroom observations in selected classes from the pilot and field studies to investigate how teachers use the AI-supported assessments and how they make use of the student data these assessments generate to inform instruction and facilitate students’ scientific argumentation.

To triangulate with the classroom observation data, researchers will interview selected science and EL teachers to examine how teachers perceive and use the AI-supported assessments to support their students’ scientific argumentation learning and how well the assessment tasks support teachers’ differentiated decisions and actions.

Products and Publications

This project will result in a fully developed and validated AI-supported classroom assessment tool to measure middle-school students’ reasoning patterns when engaging in the practice of scientific argumentation about ecosystem phenomena. This project will also result in peer-reviewed publications and presentations as well as additional dissemination products that reach education stakeholders such as practitioners and policymakers.

Our Team

Xiaoming Zhai

Principal investigator

Ninghao Liu

Co-principal investigator

Shuchen Guo

Postdoctoral research scientist

Ehsan Latif

Postdoctoral research associate

Xuansheng Wu

Graduate student
