Is there a better way to test reading comprehension?

By Kathryn Kao

In schools across the country, students are taught to read paragraphs and then answer questions about the text. But a new grant for a University of Georgia researcher may show different—and ultimately better—ways to test students' reading comprehension.

While past research on test-taking strategies has focused solely on college and high school students, a new $1.4 million grant awarded by the Institute of Education Sciences will examine the extent to which reading comprehension tests really measure the reading comprehension skills of school-aged students.

Scott Ardoin, professor and head of the department of educational psychology in the College of Education, will lead a four-year, four-part study to help educators better understand the test-taking behavior of third-, fifth- and eighth-grade students.

By analyzing factors such as eye movement, reading achievement, working memory and motivation, his team will determine whether adjustments are needed to improve the effectiveness of reading comprehension assessments and instruction by teachers.

"The eye-tracking data we collected as part of a previous grant-funded project led us here because the data allowed us to recognize that some students were not reading the entire text," said Ardoin, who also serves as co-director of the Center for Autism and Behavioral Education Research at the College.

With improvements in eye-tracking technology, researchers can now do more than assess how many questions students answer correctly—they can observe exactly how students read text and respond to questions.

However, if reading comprehension tests are meant to measure a student's ability to understand the passages they read, how can educators ensure the tests' effectiveness when one student is reading the passage and another is simply searching the text for answers?

"We know that individuals read text differently based on their purpose for reading," he added. "If students' purpose for reading a text is simply to answer comprehension questions, they are likely to read the text differently than if their purpose for reading the text is to develop an understanding of the text. Their response accuracy is also likely to differ based on their purpose of reading."

Ardoin also found that test-taking strategies taught by test-prep companies and school districts, though meant to help students, might actually hurt performance. These strategies are currently inconsistent, and some may harm students. Unfortunately, due to a lack of research, little is known about which strategies are helpful and which might be detrimental.

"Preliminary data suggest that reading questions before the text, as some students are taught, could be detrimental, especially for students with poor working memory," said Ardoin. "Unfortunately, ineffective, and possibly detrimental, strategies are more likely to be given to those students who are already struggling to read."

Additionally, different test characteristics, like text length, text type and question type, may cause the same student to earn drastically different scores on two separate reading comprehension tests. As a result, while the first study will examine eye movements to measure test-taking behavior, the remaining studies will manipulate the question-response format to see if assessments can be improved to more accurately measure reading comprehension.

By varying the types of questions students must answer, such as multiple choice or short answer, and controlling whether students have access to the passages when responding, Ardoin and his team, which includes Katherine Binder at Mount Holyoke College in Massachusetts, can determine whether certain test formats provide a better measure of students' comprehension skills.

"One study with college participants showed that having to respond to a short answer question resulted in participants reading the passage much more thoroughly than if they just had to answer multiple choice questions," said Ardoin. "By just asking one short answer question, we might be able to encourage students to engage in behavior that's more reading-like."

The ultimate goal, added Ardoin, is to help students read for meaning so they can provide instructors with a better understanding of their comprehension skills. As a result, the study may also impact classroom instruction by altering what teachers know about individual students' reading skills and what reading comprehension strategies they teach.
