A Reply to NCTQ’s Defense of its Rating System

Paul E. Peterson
June 28, 2013

In my June 25 blog post, I reported that effective Florida teacher preparation programs received no better ratings by the National Council on Teacher Quality (NCTQ) than ineffective ones.

For example, the graduate program at the University of Florida turned out teachers who were particularly effective at lifting student achievement in math, but it received no better an NCTQ rating than other, less effective schools. Florida Atlantic did noticeably less well than the University of Central Florida at turning out graduates effective at teaching reading, yet Central Florida got the lower rating from NCTQ.

I cautioned readers not to draw strong conclusions, however, because the number of teacher preparation programs for which I have the necessary information is very limited.

In its reply to my blog post, NCTQ repeats my caution concerning the paucity of available information. It then defends its study’s ability to identify teacher preparation effectiveness by making two points:

a. My data cannot be taken as a check on NCTQ’s rating system, because they come from a period stretching from 2002 to 2009, while NCTQ did its evaluation in 2012. It is true that programs that were effective in the past could have slipped, while ineffective ones could have leaped forward. But institutions of higher education, with their heavily tenured faculties, do not alter their patterns of operation quickly or easily. It is more likely that NCTQ’s rating system is inaccurate than that Florida teacher preparation programs have changed dramatically since the graduates we observed were trained.

b. My data show that St. Petersburg College’s undergraduate teacher training program is doing at least as well as, and probably somewhat better than, other undergraduate teacher training programs in Florida. As NCTQ correctly points out, that finding does not necessarily contradict its extremely low rating (NCTQ put up a danger sign to warn students away) of the college’s graduate program, the aspect of the college’s offerings NCTQ evaluated. Still, it is unlikely that a college with an apparently better-than-satisfactory undergraduate training program is simply disastrous when it comes to handing out the M.A. In all likelihood, the mission, staff, and operations of the B.A. and M.A. training programs overlap.

The good news is that we agree that much more work is needed to identify appropriate indicators of teacher preparation effectiveness across the nation. I invite others to look empirically at teacher preparation program effectiveness in other states in ways similar to the methodology we employed in Florida. Perhaps they will find that NCTQ’s rating system is dead on and that our Florida results are an exception to the general pattern. If that turns out to be the case, then we would, indeed, have a powerful tool for enhancing teacher preparation programs. But we will not know much about teacher preparation effectiveness until we can link teacher training directly to student achievement.

-Paul E. Peterson