ANALYSIS OF NJ DEPARTMENT OF EDUCATION'S NEW SCHOOL PERFORMANCE
REPORTS FINDS FAULT ON MANY LEVELS WITH METHODOLOGY AND INACCURATE DATA
On April 10, 2013, the NJ Department of Education (DOE) released its much anticipated, and much delayed, School Performance Reports, which replace the State School Report Cards.
In releasing the Performance Reports, the DOE claimed they will "help users better understand school performance in the context of state performance and the performance of similar 'peer schools.'"
The Performance Reports, however, fail to live up to this claim. Unlike the School Report Cards, the new reports are dense, confusing and needlessly complex. NJ school administrators have already raised serious concerns about inaccurate data and the convoluted and controversial school "peer rankings."
Most importantly, the complexity of the Performance Reports defeats their basic purpose: to give parents and taxpayers key information about the overall performance of their public schools and districts - successes, gains and challenges. Instead, the reports use very complicated methods of sorting and comparing individual schools with "peer" groupings, statewide averages and other benchmarks. This complexity makes the reports difficult, if not impossible, for parents, concerned citizens, lawmakers and others to understand and use to engage in positive efforts to support New Jersey's public education system.
The cornerstones of the new Performance Reports are comparisons of individual schools' test scores, graduation rates and other indicators with schools that supposedly share similar student enrollment characteristics. The DOE has decided to no longer use District Factor Groups (DFG) for comparison. DFGs placed districts, not schools, into eight groups based on the socioeconomic conditions of the communities they served. Instead of the DFGs, the DOE is using a methodology called "Propensity Score Matching," which creates a list of "peers" for each school in New Jersey, grouping schools together based on shared demographic characteristics, namely student poverty, limited English proficiency, and Special Education classification.
However, the DOE has made some questionable analytic decisions that result in comparisons among schools that actually vary quite dramatically in terms of their student makeup. This variation in so-called "peer" groupings of schools has generated confusion and frustration among local educators and stakeholders.
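To illustrate the problem in concrete terms, the following minimal sketch (written in Python, with invented school names, demographic figures and an equal-weight formula that are not the DOE's actual data or method, and a simple averaging step rather than a fitted statistical model) shows how collapsing several characteristics into a single matching score can group together schools with very different student populations.

```python
# Hypothetical sketch of peer grouping by a single composite score.
# School names, percentages, and equal weights are invented for illustration;
# they are not the DOE's actual data or formula.

schools = {
    # name: (% economically disadvantaged, % limited English proficient, % special ed)
    "School A": (60, 5, 15),
    "School B": (20, 30, 18),
    "School C": (42, 12, 16),
    "School D": (45, 10, 17),
    "School E": (10, 2, 12),
}

def composite_score(name):
    """Collapse three demographic rates into one number (equal weights assumed)."""
    poverty, lep, sped = schools[name]
    return (poverty + lep + sped) / 3

def peers(target, k=2):
    """Return the k schools whose composite score is closest to the target's."""
    t = composite_score(target)
    others = [s for s in schools if s != target]
    return sorted(others, key=lambda s: abs(composite_score(s) - t))[:k]

for name in schools:
    print(name, round(composite_score(name), 1), "->", peers(name))
```

In this invented example, School B and School C end up as "peers" even though their poverty and limited English proficiency rates differ sharply, because the single combined score masks those differences.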
In addition, the DOE compares each individual school to both its "peers" and the state overall using percentile ranks. The reports place a school's position relative to other schools on a scale from zero to ninety-nine, representing the percentage of "peer schools" that school is outperforming.
The DOE's use of this method creates problems because percentile ranks are relative, or, in other words, a zero-sum game. A school can only be seen as successful, or "highly performing," if it is outpacing its "peer schools," regardless of its actual achievement. The DOE's failure to provide an adequate context for these rankings means users will have no idea about the absolute distance between a school ranked at the bottom and one ranked at the top. The schools may vary widely in performance, or hardly at all. Because the reports offer no additional data on the range of scores, the user is unable to determine how meaningful those rankings are.
The DOE then goes further by labeling schools using an even broader categorization of the percentile rankings. The computer-generated "school narratives" assign schools to one of five performance categories ranging from "very high" to "significantly lagging." This means that, regardless of absolute achievement, many schools are labeled as "lagging" simply because they are on the lower end of their peer group, not because they are underperforming in any meaningful sense.
For example, if a school has a proficiency rate of 95%, but the majority of its peers score even higher, this school will have a low percentile ranking and will be labeled as "lagging," despite a high level of achievement. In another scenario, a school may have a proficiency rate of 75% and a low peer percentile rank, but could be separated from its top-performing peer by just a few, or as many as 25, percentage points.
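The following minimal sketch (again in Python, with invented proficiency rates, an invented peer group and assumed label cut-points rather than the DOE's actual thresholds) shows how a school at 95% proficiency can be ranked at the bottom of its peer group and labeled as lagging even though every school in the group performs at a high level.

```python
# Hypothetical sketch of peer percentile ranks and performance labels.
# Proficiency rates, the peer group, and label cut-points are invented for
# illustration; the DOE's actual thresholds and narrative categories may differ.

peer_group = {
    "School X": 95.0,   # high absolute proficiency
    "Peer 1": 96.0,
    "Peer 2": 97.0,
    "Peer 3": 98.0,
    "Peer 4": 99.0,
}

def percentile_rank(name):
    """Percent of peers the school outperforms, on the 0-99 scale used in the reports."""
    score = peer_group[name]
    others = [v for k, v in peer_group.items() if k != name]
    below = sum(1 for v in others if v < score)
    return round(99 * below / len(others))

def label(rank):
    """Assumed cut-points for the five narrative categories."""
    if rank >= 80: return "very high"
    if rank >= 60: return "high"
    if rank >= 40: return "about average"
    if rank >= 20: return "lagging"
    return "significantly lagging"

for name, score in peer_group.items():
    r = percentile_rank(name)
    print(f"{name}: {score}% proficient, peer rank {r}, labeled '{label(r)}'")
```

In this hypothetical group, the lowest-ranked school trails the highest by only four percentage points, yet the labels alone suggest a dramatic gap.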
In choosing to present the data in this way, the DOE has created a framework of competitive rankings and an emphasis on labeling performance as "lagging," even among the state's highest performing schools. The reports do not give parents clear information to realistically judge their children's schools' performance, and they burden school administrators with the unenviable task of explaining the complicated and sometimes contradictory classifications.
"The
over-emphasis on complex rankings is consistent with NJ Education Commissioner
Christopher Cerf's continuing narrative of 'failing public schools' when, in
fact, New Jersey's public schools are among the best in the nation," said
David Sciarra, ELC Executive Director. "Rather than helping facilitate
community conversations and collaborative efforts to improve our schools, the
new Performance Reports are clearly designed to justify the Christie
Administration's agenda of cutting State investment in public education and
imposing heavy-handed, top-down interventions from Trenton."
Using the DOE's own labeling, the new Performance Reports are "significantly lagging."
Education Law Center Press Contact:
Sharon Krengel
Policy and Outreach Director