The Implementation Audit: How to Stop Education Stimulus Fund Waste Before It Starts

Posted April 14, 2009 at 11:11am

Whether the new federal education funding stimulates innovation and student achievement, as President Barack Obama and Education Secretary Arne Duncan hope, or merely bankrolls another round of “spray and pray” initiatives in already-overwhelmed school systems is an open question. In order to tilt the odds in favor of the former, the Department of Education should insist that school systems have an implementation audit in place before they spend a nickel of federal funds. This will mean a fundamental shift in the notion of educational accountability from a litany of test scores to a meaningful combination of student results and measurable observation of teaching and leadership actions.

The implementation audit addresses three questions that are too rarely considered in education spending today.

First, what are we already doing? When I have conducted initiative inventories in school systems, I invariably find schools that are not implementing programs that the system-level leadership believed were universally applied. I also find programs, curricula and teaching methods from earlier geological eras that system-level leadership had assumed were long since abandoned. It is folly to pile on new initiatives without knowing what is happening at the school level, and the answer to that question is found not in strategic planning and vision statements, but in direct observation and inquiry in the schools themselves.

The second essential question is, what is the range of implementation of present programs? Too much educational evaluation relies on the binary fallacy: programs are either implemented or not.
In the world of education, however, the differences within the group of schools where an educational intervention is supposedly present can be as significant as the differences between those schools and schools without any program at all.

In one analysis we recently conducted of a reading curriculum, for example, system leaders believed that the program was “universal and non-negotiable.” In fact, the time devoted to this supposedly universal program ranged from 45 minutes per day in some schools to 180 minutes per day in others. The intervention time for students for whom the program was insufficient ranged from zero minutes per day to 120 minutes per day. The ranges in faculty training, adherence to program norms and supervision were all similarly large.

A better approach is to create now, before funds are spent, a scoring guide that reflects a range of implementation. At the highest level, for example, one might expect to see consistent application, faithful implementation of the intervention model, diligent oversight, frequent reflection on data and midcourse corrections, and the collection of evidence related to student responses to the intervention. At the lower end of the spectrum would be those schools where the only indicator of implementation was “delivery”: the boxes of material and stultifying training delivered to seething teachers and distracted administrators. When I have evaluated this range of implementation (typically four distinct levels), I find that a school’s level of implementation can be as significant as, or more significant than, the identity of the program being implemented.

The third question is perhaps the most obvious, yet least frequently asked: What is the relationship between deep implementation and gains in student achievement? In our studies of more than 2,000 schools, we have found some practices where deep implementation is strongly associated with improvements in reading, math, science, social studies and writing.
We have learned that other practices have little relationship to achievement. We have found a few practices, such as the scores of poorly defined goals and unmonitored strategies buried inside hundred-page “school improvement plans,” to be inversely related to achievement gains. That is, many long and elegantly written strategic plans are “strategic” only in the sense that Napoleon might have used the term at Waterloo. If school leaders are going to monitor what administrators and teachers are really doing, then they cannot claim to have dozens of “strategic priorities.”

The good news is that cutting-edge systems have already proven that educational accountability can be more than a litany of test scores. They have essentially said, “Mr. President, we’ll see you one and raise you 10.” They not only report their reading and math scores on an annual basis, but also gather information on the performance of students, teachers and administrators throughout the year. They regularly analyze the relationship between student results and the underlying causes of those results, including not only the characteristics of the students but also the teaching and leadership strategies that are within the direct control of the schools. These are the schools that will be conducting “science fairs for adults” this summer, so that the entire community can see the relationship between student results and the educational strategies the schools are using. They candidly admit their mistakes, share their successes and challenge one another to make a difference. Moreover, they know that, as a fundamental ethical principle, no child in their systems will be more accountable than the adults.

These are the schools that will be ready to respond to the president’s challenge to spend the stimulus money wisely. The others, and I fear they are legion, will have data only on the test scores, economic status and ethnicity of their students.
When their results are again low, they will explain the first factor by a combination of the second and third, and another hundred billion dollars will go down the drain.

Douglas Reeves is founder of the Leadership and Learning Center.