Summary

Performance-based constructed-response items often preclude timely feedback, which hinders science teachers from using these assessments. However, artificial intelligence demonstrates great potential to meet this challenge. To discuss this practice further, experts in assessment, AI, and science education will gather for a two-day conference at the University of Georgia to generate knowledge about integrating AI in science assessment.

Abstract

A Framework for K-12 Science Education has set forth an ambitious vision for science learning by integrating disciplinary core ideas, scientific and engineering practices, and crosscutting concepts, so that students can develop the competence to meet the STEM challenges of the 21st century. Achieving this vision requires transforming assessment practices from a reliance on multiple-choice items to performance-based, knowledge-in-use tasks.

Such novel assessment tasks serve the dual purpose of engaging students in using knowledge to solve problems and tracking students' learning progression so teachers can adjust instruction to meet students' needs. However, these performance-based constructed-response items often preclude timely feedback, which, in turn, has hindered science teachers from using these assessments. Artificial intelligence (AI) has demonstrated great potential to meet this assessment challenge. To tackle it, experts in assessment, AI, and science education will gather for a two-day conference at the University of Georgia to generate knowledge about integrating AI in science assessment.

The conference is organized around four themes:

  • AI and domain-specific learning theory
  • AI and validity theory and assessment design principles
  • AI and technology integration theory
  • AI and pedagogical theory focusing on assessment practices

The conference will allow participants to share theoretical perspectives, empirical findings, and research experiences. It will also help identify challenges and future research directions to broaden the use of AI-based assessments in science education. The conference will be open to other researchers, postdocs, and students via Zoom. Conference participants are expected to establish a network in this emergent area of science assessment. A further outcome of the conference, Applying AI in STEM Assessment, will be published as an edited volume by Harvard Education Press.

The Discovery Research PreK-12 Program (DRK-12) seeks to significantly enhance the learning and teaching of science, technology, engineering, and mathematics (STEM) by preK-12 students and teachers through research and development of innovative resources, models, and tools. Projects in the DRK-12 program build on fundamental research in STEM education and prior research and development efforts that provide theoretical and empirical justification for proposed projects.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the foundation's intellectual merit and broader impacts review criteria.

Sponsor

National Science Foundation, DRK-12 Program (Conference Grant)
$49,995

Principal Investigator

Xiaoming Zhai

Assistant Professor, Mathematics, Science, and Social Studies Education

Co-Principal Investigator: Joseph Krajcik, Director of the CREATE for STEM Institute and Professor, Michigan State University

Active Since

August 2021