Reading the coverage of New York City’s Renewal program, it would be easy to conclude that the program isn’t working for most schools.
And on Thursday night, the New York Times continued in that vein, with a story about the turnaround program headlined: “For $582 Million Spent on Troubled Schools, Some Gains, More Disappointments.”
The story is framed as a fact-check on Mayor Bill de Blasio, who said at a press conference that Renewal schools are showing signs of progress because they have outpaced the city’s average growth in English and math scores.
In evaluating that claim, the Times points out that despite gains at some schools, most Renewal schools have not actually closed the gap between their scores and the city average since the program began three years ago. Some of the program’s fiercest critics seized on the analysis.
But according to three academics who study school performance, two of whom have studied the Renewal program’s impact, the Times’ characterization of the program as producing spotty results is problematic for the same reason de Blasio’s original claim of success doesn’t hold water: comparing Renewal test score data to city averages is poor evidence of whether the program is working.
The Times’ analysis can’t actually establish a causal effect.
The Times frames its analysis this way:
“To track the effects of the program, which gives schools a longer day and access to special services like vision care for students or mental health supports, The New York Times analyzed Renewal school performance on the 2016 and 2017 tests, as compared with the 2015 scores.”
The phrasing suggests that it’s reasonable to infer “the effects of the program” from test score changes, which is simply not possible, according to Thomas Dee, director of Stanford’s Center for Education Policy Analysis.
That’s because establishing a program’s effect depends on a model that can estimate what would have happened to test scores without the program at all. One way to isolate that effect would be to compare low-performing schools that didn’t make it into the Renewal program with those that did, and study the difference in scores between the two groups of schools. But neither the education department nor the Times analysis attempted to do that, making those claims about the program’s impact misleading, experts said.
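The comparison-group logic the experts describe can be illustrated with a minimal sketch. The numbers below are entirely hypothetical, invented for illustration; the point is only the arithmetic of comparing changes at participating schools against changes at similar non-participants, rather than against a citywide average:

```python
# Hypothetical average test scores, before and after the program.
renewal = {"before": 55.0, "after": 61.0}      # schools in the program
comparison = {"before": 54.0, "after": 58.0}   # similar schools not in it

renewal_change = renewal["after"] - renewal["before"]
comparison_change = comparison["after"] - comparison["before"]

# The change beyond what similar schools achieved without the program
# approximates the program's effect; the raw 6-point gain alone does not.
estimated_effect = renewal_change - comparison_change
print(estimated_effect)  # 2.0
```

In this toy example, Renewal schools gained 6 points, but similar schools gained 4 points on their own, so only about 2 points would be attributable to the program. A comparison against the citywide average would miss that distinction.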
“There’s often this tacit assumption that we’ve learned about the true effect of the program from comparisons like this, and any researcher worth their salt will tell you that’s not the case,” Dee said.
Amy Virshup, a Times metro editor, defended the story’s analysis. “The piece never presumes to judge the success or failure of the Renewal program based on the ELA and math test results,” Virshup wrote in an email. “Judging whether Renewal is working or not would require many more data points and much more analysis.”
Renewal schools may be serving different students than when the program started.
Another reason the test scores could be misleading: as has been widely reported, Renewal schools have lost a significant share of their students, continuing an enrollment decline that has persisted for years.
And since higher-performing students may be more likely to find a new school, it’s plausible that Renewal schools are serving a more challenging student body than when the program started.
“In general, the students that are most [likely to leave] are those who are higher performing,” said Marcus Winters, who wrote a report about the Renewal program for the conservative-leaning Manhattan Institute.
If that’s true, year-over-year test score comparisons wouldn’t be completely fair, since they would simply pick up changes in which students are served by Renewal schools — instead of the program’s real effect.
“The Times really didn’t do anything to ensure that their comparison to other schools was really comparable,” said Teachers College professor Aaron Pallas.
There’s no mention of rigorous research that has attempted to show causal effects.
Two researchers have tried to suss out whether the Renewal program is creating positive academic changes, and have reached different conclusions.
In an analysis that compares Renewal schools to similar ones that didn’t enter the program, Pallas found the program had essentially no effect on graduation rates or test scores. Meanwhile, using a different statistical model, the Manhattan Institute’s Winters found that the program is actually creating meaningful academic benefits.
The Times doesn’t cite either of those analyses, which would complicate the picture. The results of those research efforts suggest the Times’ description of mixed results is certainly plausible, but it isn’t directly supported by the data analyzed in the story.
Still, Winters said, the Times analysis is worth doing, as long as there are caveats, missing in this case, about what it can and can’t explain. “I don’t think it’s the definitive analysis of what’s going on,” Winters said. “But it’s not nothing.”
This article was originally posted on Chalkbeat, a nonprofit news site covering educational change in public schools.