SBAC Creates Barriers for High-Poverty Districts, Not a Valid Measure of Student Growth

CEA Director of Policy & Research Donald Williams at today's meeting of the state Mastery Examination Committee.

October 19, 2016

New research showing that the Smarter Balanced Assessment Consortium (SBAC) test disproportionately disadvantages students and teachers in high-poverty districts was distributed to members of the state Mastery Examination Committee today.

The study of 600 teachers, conducted by Abacus Associates for the Connecticut Education Association, underscores mounting concerns among legislators, educators, parents, and others about the test's validity, fairness, and negative impact on students—particularly those in high-poverty districts and those with limited access to computers.

CEA Director of Policy, Research, and Reform Donald Williams, who spoke at the Mastery Examination Committee meeting, also expressed strong concerns about the test's failure to measure student growth or to account for summer loss. While summer learning loss varies across subjects, grade levels, and socioeconomic groups, students on average score lower on assessments administered after summer break than on those given at the end of the school year.

"Summer loss is a real problem and is more pronounced in some districts. If we're looking at measuring growth from April to April, you can't measure whether there was summer loss. I think that is deeply flawed," he said.

Williams told the committee, "We will be disagreeing fundamentally if some in the group believe SBAC is designed to measure growth."

A more appropriate way to capture accurate information about students' academic growth and needs, he said, is to measure students' growth from September to June.

Commenting on SBAC's shortcomings, Joseph Cirasuolo, executive director of the Connecticut Association of Public School Superintendents (CAPSS), acknowledged, "It gives you a score at the end of the game; it doesn't tell you why you won or lost."

Williams added, "We're not saying we should not have a mastery exam. We're not saying we should not use the mastery exam. But use the test for what it was designed for: to give a big-picture snapshot that could influence programming, professional development, curriculum, interventions, and other factors—not as a high-stakes determinant of school rankings, administrator rankings, or teacher rankings."

He concluded, "We want a variety of indicators of growth. We want a rich, holistic set of indicators of student growth and development that provide a full picture."
