Saturday, December 11, 2010

Charter School Accountability

A common criticism lobbed at charters is that they can pick and choose who attends their school, and that they can more easily remove students. This artificially inflates their performance, the argument goes, because that ability means they are not doing the same job as traditional public schools.

This can be seen in the first comment on The Notebook article responding to Marc Mannella's publication of the KIPP Open Book. The comment reads:

"Marc

Thank you for your post. I like the concept of an Open Book. Maybe you can do a follow up post where you share both your struggles and strategies to retain your students. I noticed in your Open Book Executive Summary , that KIPPs retains 87%-90% of your students. What are the reasons you loose your students? What does the quanative and qualative data say about those students? What percentage of students leave because KIPP is not the right educational fit?

There is this perception that some charter schools dump students that dont fit their mold.

It is great that you are willing to be transparent to settle these perceptions."


While this commenter took the time to navigate here, he or she did not take the time to read the graph that KIPP clearly presents right next to the retention data: in 2008-09, KIPP lost 2% of its students because they moved and 10% for "other reasons," defined as "e.g. transportation, parents deciding the school was not a good fit for their student." Nowhere in the Open Book, interestingly enough, can you find total enrollment figures, so you cannot determine how many students that is. As it stands, though, this is something of a non-starter-- 10% leave the school in one year for reasons other than moving? That does not seem like an enormous amount, especially since we have no baseline to compare it against. A more pointed question would ask KIPP to break that 10% down against overall enrollment figures, especially because KIPP says that it tracks all of this (footnote 3). That would please the critics, and me.

Otherwise, minus some abysmal graphs (the average-growth chart purports to show a single number spanning two years, yet it displays 5th and 8th grade scores. Are these the average scores for each grade, with growth being the difference between the two? Then what do the individual bars represent? And all of this contradicts the subtitle, "Average of 2004 - 2010 PSSA Data," which implies they subtracted two averages), the document is an excellent step toward quieting critics. Public schools don't make their teacher retention data available-- KIPP does, and it throws in some resumes and administrator pay scales to boot.


Thursday, November 25, 2010

Senate Bill 441

In my second post on Pennsylvania's Race to the Top, I wrote about why Pennsylvania lost. I repeatedly mentioned Senate Bill 441, which among other things would have allowed approved alternative routes to teacher certification, thereby limiting traditional programs' stranglehold on teacher licensing, a notorious cash cow for them. Specifically, I wrote that:

...the stalled passage of Senate Bill 441 directly caused 10.6 of the 42.6 points to be lost in sub-criteria D1... the application comments state, 'yet despite votes in the house and senate that were overwhelmingly supportive of technical aspects of the bill subsequent senate/house reconciliation held up its passage. Because the law has yet to be passed and could still be a victim of politics medium points are awarded.' And the reviewers then withheld further points, and rightly so: the bill already was a victim of politics—why should they believe it wouldn’t later on?

Well, so what happened with the bill?

On November 22, it was presented to the governor-- completely stripped of any language that was promised at the time of the Race to the Top application. Take a look for yourself. Absolutely everything of any substance is gone, but at least some doctors can now sign some new forms.

Earlier I sent an email to Senator Vance, the sponsor of the bill, asking: "I have been following senate bill 441, and I was wondering at what point all language relating to alternative routes to certification was stripped from the bill and why it was done." We'll see if she responds.

What is sad here is that the application reviewers were completely right-- SB 441 went from something promising to something else altogether, further vindicating the decision NOT to award Pennsylvania any Race to the Top money.

Saturday, November 20, 2010

College Bound

Recently, the Inquirer published this article, which states that "Only 49 percent of Philadelphia graduates ever enroll in two- or four-year colleges."

Let's do a little work with that percentage. First, it's 49 percent of Philadelphia high school graduates-- so how many Philadelphia high schoolers graduate?

Well, a Notebook article told us that the "district's four-year, on-time graduation rate for the freshman class that started in fall 2006...[is] 57 percent."

OK, so only 57% of Philadelphia public high schoolers graduate on time, and of those graduates, only 49% enroll in college. The actual percentage of Philadelphia public high school students who attend college is therefore not 49% but about 28% (49% of 57%).

So, if you walk into a Philadelphia high school classroom of 20 students, about 5, maybe 6, of them will go to college.
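The arithmetic is simple enough to check in a few lines of Python, using the figures cited above:

```python
# Combined college-attendance rate for Philadelphia public high school students,
# using the figures cited above: a 57% on-time graduation rate (Notebook)
# and 49% of graduates enrolling in college (Inquirer).
grad_rate = 0.57
college_rate_of_grads = 0.49

overall = grad_rate * college_rate_of_grads
print(f"Overall college-attendance rate: {overall:.0%}")  # 28%

# Scaled to a hypothetical classroom of 20 students:
classroom = 20
print(f"Of {classroom} students, about {overall * classroom:.1f} will attend college")
```

The compounding is the whole point: each percentage looks survivable on its own, but multiplied together they leave barely a quarter of students on a path to college.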

Friday, November 5, 2010

Education and Statistics

Recently The Notebook published a piece by Dale Mezzacappa that illustrates the misuse of educational statistics.

The data show that "about a quarter of students initially referred for expulsion by their schools since August 2009 were not ultimately expelled by the School Reform Commission," and that since April "more than one-third of the cases brought to the SRC did not result in expulsion."

That is all the data show. The 1/4 figure comes from a 13-month period; the 1/3 figure comes from April to the present. Over the 13 months, the exact number is 76 out of 324 students not expelled. The author presents no n for the 1/3 figure.

Using these numbers, the author goes on to assert that, "still, the reversals point to the possibility that schools are recommending students for expulsion without learning enough about the circumstances of the incident, or that the District’s zero tolerance discipline policy leaves too little discretion to school administrators and hearing officers in the earliest stages of the process."

Unfortunately, the numbers do not point to that at all. In fact, the numbers really do not show anything.

1) There is no baseline measurement for a previous 13-month period. How do we know if 1/4 is high, low, or the same?
2) There is no baseline measurement for April-October (the 1/3 number). How do we know if 1/3 is high, low, or the same?
3) It is impossible to compare April 2010-October 2010 to August 2009-October 2010 (the 1/4 to 1/3 "increase"). How do we know that this "increase" is not due to external effects, such as the end of one school year and the beginning of the next?
4) There is no n presented for the April-current number. How do we know if there were 3 or 100 expulsion cases?
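To make point 4 concrete, here is a quick sketch (the 76/324 figure is from the article; the hypothetical case counts are mine) showing why "more than one-third" is nearly meaningless without an n:

```python
# The only hard numbers in the article: 76 of 324 expulsion referrals
# over 13 months did not end in expulsion.
not_expelled, referred = 76, 324
print(f"Share not expelled over 13 months: {not_expelled / referred:.1%}")  # 23.5%

# The "more than one-third" figure since April comes with no n at all.
# Hypothetical case counts show how different the same fraction can look:
for n in (3, 30, 99):
    print(f"If there were {n:3d} cases, 1/3 would be {n // 3} students")
```

One reversal out of three cases and thirty-three out of ninety-nine both produce "one-third," but they support very different conclusions, which is exactly why the missing n matters.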

When a commenter at the bottom pointed this out, Paul Socolar explained that no baseline exists and then asserted that "the suggestion that expulsion data need to be compared to expulsions from the same month or time of year in a previous year might make sense if expulsion cases were handled in a timely manner..." That makes little sense, and the commenter responded again with the point that, if the study were done over a longer term, periodic or cyclical effects could be assumed in both sets and comparisons could be made.

However, no one responded to that. Why? Probably because it is right. Furthermore, it essentially undoes the supposedly statistical support that launched the premise of the article. That premise was valid and important, but once again sloppy misuse of data undermines a legitimate point.

Pennsylvania's Loss in Race to the Top (August 27)

It seems everyone is quick to blame Pennsylvania’s loss in Race to the Top on the lack of union buy-in. For example, Rendell, who maintains that “in all fairness, we should have won,” despite Pennsylvania’s application falling over thirty points short, said that if all Pennsylvania State Education Association and American Federation of Teachers locals in the state had “bought in” to the application, “it would have been very helpful.” Senator Piccola also said, “Clearly, our application was not strong enough. It lacked an education empowerment piece, and it needed total buy-in from teachers unions and school districts. Only one-third of school districts and teachers unions signed on to the [Race to the Top] application at a time when we need all hands on deck for a major sea change in education.” (http://www.post-gazette.com/pg/10237/1082446-454.stm) So that’s it, then: the unions caused Pennsylvania to lose the Race to the Top.

Actually, not quite. A simple and quick analysis of the second-phase scores shows that on section A (“State Success Factors”), which deals mostly with union buy-in, Pennsylvania scored 103.2 out of 125 points. That is 82.5%, which actually places it higher than a winning state (Rhode Island). On the other hand, for section D (“Great Teachers and Leaders”), whose many perplexing flaws and weaknesses in the application I wrote about, Pennsylvania scored 95.4 out of 138 points, an astonishingly low 69.1%. That percentage places Pennsylvania 15 percentage points below the lowest winning state, and it is the only state in the top twenty to score in the 60s. In fact, Pennsylvania lost more than half of its total points in this one section (42.6 of 82.4 points lost). Politicians should stop pointing fingers at the Local Education Agencies (LEAs) and instead examine why they, and the students of Pennsylvania, actually lost.

But that would require them to point the finger directly at themselves: the stalled passage of Senate Bill 441 directly caused 10.6 of the 42.6 points to be lost in sub-criteria D1. Rendell minimized this loss when he “noted, however, that [the failed passage] cost the state 12 points, not enough to clear the 440-point hurdle” (note: I am entirely unaware of where this “12 points” figure comes from, as Pennsylvania scored 10.4 in the 21-point category covering this sub-criteria). Yes, that is true, but Pennsylvania scored 417.6 in the second round, and 440 is Rendell’s mark; if we use Rendell’s “12 points” figure, the stalled passage accounted for over half of the 22.4 points we needed to hit that 440 mark. And that is quite a different way to look at this stalled passage.
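The point is easy to verify with the figures cited in this post (417.6 scored, Rendell's 440 mark, his "12 points" claim, and the 10.4 of 21 awarded in D1):

```python
# Pennsylvania's round-two score and the mark Rendell cites as needed to win.
score, mark = 417.6, 440
gap = mark - score
print(f"Points short of the 440 mark: {gap:.1f}")  # 22.4

# Whether SB 441 cost 12 points (Rendell's figure) or 10.6 points
# (21 possible in the D1 category minus the 10.4 awarded), either way
# it accounts for roughly half of the entire gap.
for label, cost in (("Rendell's 12 points", 12.0), ("D1's 10.6 points", 21 - 10.4)):
    print(f"{label}: {cost / gap:.0%} of the gap")
```

However you count it, minimizing a single item that covers about half the shortfall is hard to defend.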

And this gets even more frustrating when one examines why the bill stalled. In a June 4 article (http://thebulletin.us/articles/2010/06/04/news/local_state/doc4c098ad446bd4050600565.txt), Eric Boehm of The Bulletin neatly describes how political squabbling led by Senator Piccola caused SB 441 to be sent back to the House. Senator Piccola, in a misguided attempt to “strengthen” an already strong bill, inserted language entirely detached from the intent of the legislation (related to the state takeover of the Harrisburg School District). It was rightly rejected, for it had little, if anything, to do with the original bill. Had Senator Piccola not inserted his amendment and the bill passed, Pennsylvania would have been about halfway toward closing the gap to the 440-point mark. And this is not conjecture: the application comments state, “yet despite votes in the house and senate that were overwhelmingly supportive of technical aspects of the bill subsequent senate/house reconciliation held up its passage. Because the law has yet to be passed and could still be a victim of politics medium points are awarded.” And the reviewers then withheld further points, and rightly so: the bill already was a victim of politics; why should they believe it wouldn’t be later on?

But that still leaves around another 10-12 points to go. If Pennsylvania had simply paid attention to the first phase’s application comments, it could have knocked out half of those: in sub-criteria D4 (iii), which concerns the expansion of successful educator preparation programs, I pointed out that the second application essentially restates the first even after the first application’s reviewers noted weaknesses. And there are no surprises here: Pennsylvania lost 5 points, and the second application’s reviewers tersely noted, “there was some discussion but little explanation of how the state plans to expand effective programs. Low points were awarded expanding effective programs.”

To knock out the remaining 5-7 points and reach 440, Pennsylvania could have improved on any of the remaining criteria in section D: in D2, we lost 12.8 points; in D3, 8.2 points; and in D5, 6 points. That totals 27 points lost across these three other sub-criteria in D-- 5.2 more points than were lost in all of Section A-- and a small improvement in any of them would have put Pennsylvania into the winner’s circle.
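Tallying all of the Section D losses cited across this post shows how neatly they account for the shortfall:

```python
# Points lost in each Section D sub-criterion, as cited in this post:
# D1 (SB 441), D2, D3, D4(iii), and D5.
d_losses = {"D1": 10.6, "D2": 12.8, "D3": 8.2, "D4(iii)": 5.0, "D5": 6.0}
print(f"Total Section D loss: {sum(d_losses.values()):.1f}")  # 42.6

# The three sub-criteria other than D1 and D4 alone exceed all of Section A.
other = d_losses["D2"] + d_losses["D3"] + d_losses["D5"]
section_a_loss = 125 - 103.2
print(f"D2 + D3 + D5 losses: {other:.1f}")                      # 27.0
print(f"Exceeds Section A's total loss by: {other - section_a_loss:.1f}")  # 5.2
```

The five sub-criteria sum exactly to the 42.6 points lost in Section D, which is why blaming Section A's union buy-in misses the real story.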

Pennsylvania's Race to the Top (July 23)

Race to the Top, a contest that awards money to states demonstrating the greatest effort to reform public school education, has garnered headlines all over the nation. So far two states have won, mostly due to one category, “Great Teachers and Leaders,” the 9 criteria of which account for 138 of the 500 possible points. Regardless of whether the contest is abolished completely for the second round, Pennsylvania’s failure in the first round can be traced back to two specific sub-criteria in this category, D1 and D4, which deal with providing high-quality and improving existing pathways for new teachers and principals.

The main criticisms of D1 and D4 were that “none of [the alternative teacher education] programs operate independently from institutions of higher education,” “[no alternative routes to principal preparation] currently exist in the state,” and that “there is no indication that successful programs will be provided a greater share of state dollars...” Essentially, no educator preparation programs exist outside of the education establishment, and if they did, there are no plans to reward or replicate successful programs.

First, whoever wrote Pennsylvania’s first application was apparently unaware that Teach for America and The New Teacher Project both operate in the state, as neither organization was mentioned. In the second application they appear as two alternative routes in the first sentence of the appropriate section. We’re off to a good start.

Then, in a direct answer to Zach Blattner’s (MGA, 2010) editorial, bill 441, which currently is undergoing reconciliation, provides that “teachers and principals who complete alternative routes will have the same certification as individuals who complete traditional routes” (p. 450 of Appendix 2). This is a definite improvement.

Yet then the application stumbles. Both TFA and TNTP train teachers, not principals. While the second application highlights that 441 would allow new routes for principals, no new program is explicitly identified. In fact, the most detailed the second application gets is that 441 would enable “Principal Certification Programs operated by entities other than institutions of higher education”—which are?

One would hope that those in charge are aware of New Leaders for New Schools—and the potential points up for grabs from involving them. The organization, regarded as one of the premier alternative principal education programs, is not included in the application, a startling oversight given what the organization could have brought to the table: the CEO is a lead advisor to the Race to the Top.

Things only get worse from there. In response to encouraging successful educator preparation programs, the first application’s reviewer wrote that “the applicant’s only plan to support and extend effective programs… is through increased demand for their students prepared through those programs” and then is even so nice as to suggest a better reward: money. Ditch the market approach—if you know some program is preparing excellent teachers, give them more money. So how does Pennsylvania respond?

Well, it does not. While the second application ambiguously writes that when alternative programs “offer especially promising results, the Department will work with the program to increase its recruitment and expand its enrollment,” it goes on to state that “we do not provide any specialized funding to teacher preparation programs directly” but will hope that demand will increase enrollment, essentially repeating the first application (Narrative and Budget, D-46, 47).

Pennsylvania’s refusal to fund successful alternative programs is puzzling: why insist that they be left empty-handed?

A possible answer appears in the second application. This states that PA “invests over a half a billion dollars annually in Higher Education” (Narrative and Budget, D-47), which mostly flows to The Pennsylvania State System of Higher Education—a network of fourteen state colleges offering education degrees. In section D on page 6, the application also writes that “a majority of our needs are met through traditional pre-baccalaureate programs,” which apparently means performing as low as 19th on NAEP nationwide rankings and having a school district that comprises nearly a tenth of statewide enrollment rank 15th out of 18 large urban districts in performance. Yet the state chooses to fund these programs.

And this is a costly choice. Pennsylvania’s refusal to fully endorse alternative preparation routes—both its inability to name a principal training program and its refusal to fund, or even consider funding, successful teacher education programs—may stand in the way of up to $500 million, funding the state could desperately use at this time.

Race to the Top applications are currently being reviewed, and winners will potentially be announced near the end of the summer if the contest funding is not used to prevent teacher layoffs.