Sunday, February 10, 2013

Questioning the value of observations . . .

This Education Week article reinforces my concern with the new teacher evaluation model in our state.  Unlike models in other states, ours does not mandate that a specific percentage of the final score be driven by student performance on state or other assessments.  My concern is with a possible move in this direction during this legislative session: SB 5246 would, among other things, require student growth to count for 50% of the summative performance rating of teachers and principals for at least three of the evaluation criteria.

What troubles me about the Education Week article is that in other states the first cycle of implementation of these new models is not yielding the results expected or desired by policymakers.  The results are not changing prior rating practices and are not differentiating between "good" and "poor" teachers.


In Michigan, 98 percent of teachers were rated effective or better under new teacher-evaluation systems recently put in place. In Florida, 97 percent of teachers were deemed effective or better.

Principals in Tennessee judged 98 percent of teachers to be "at expectations" or better last school year, while evaluators in Georgia gave good reviews to 94 percent of teachers taking part in a pilot evaluation program.

It is still too early to draw conclusions from this early data, but I am concerned with what may be perceived as a weak component, and the one yielding these high ratings: the observation.  Regardless of the model, it will be difficult for building administrators to radically change their practice and ratings in a short period of time and still maintain a positive culture of change and growth.  In this situation, value-added analysis gains credibility.


"Value added" is a statistical method of estimating the effect of a teacher's instruction on his or her students' test scores.

Tennessee's data released last summer show, for instance, that observers gave only 0.2 percent of teachers the lowest score, compared to quantitative measures that put 16.5 percent of teachers in that category.
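To make that definition concrete, here is a minimal sketch of one simple flavor of value-added estimation, a "residual gain" model run on made-up student scores.  Everything in it is hypothetical: the data are invented, and real systems such as Tennessee's TVAAS rest on far more elaborate statistical models.  It only illustrates the basic idea of comparing students' actual scores to the scores predicted from their prior performance.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 students assigned to 10 teachers, with a
# prior-year score and a current-year score for each student.
n_students, n_teachers = 200, 10
teacher = rng.integers(0, n_teachers, size=n_students)
prior = rng.normal(50, 10, size=n_students)
true_effect = rng.normal(0, 2, size=n_teachers)  # unknown in practice
current = (5 + 0.9 * prior + true_effect[teacher]
           + rng.normal(0, 5, size=n_students))

# Step 1: predict each student's current score from the prior score
# alone, using a simple least-squares fit across all students.
slope, intercept = np.polyfit(prior, current, 1)
predicted = intercept + slope * prior

# Step 2: a teacher's "value added" is the average amount by which that
# teacher's students beat (or fall short of) their predicted scores.
residuals = current - predicted
for t in range(n_teachers):
    value_added = residuals[teacher == t].mean()
    print(f"teacher {t}: value-added estimate {value_added:+.2f} points")

The point of the illustration is that a teacher's value-added rating comes out of a regression over test scores, not out of anything an observer saw in the classroom, which is exactly why the two measures can diverge as sharply as they did in Tennessee.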


Differences such as Tennessee's feed the arguments of those advocating for value added and for the use of student assessment data, shifting the focus from teacher growth to student performance.  Those advocating for SB 5246 can use the data to support their position because, as reformers point out, there should be far more teachers rated "needs improvement" than we are currently seeing.  Even though we don't yet have data in our state, this information will be used to support this reform effort.  Though I believe that we need to focus on student performance, I question this becoming the focus of what is stated to be a growth model.

For our model to be reliable, we must ensure that administrators and others can document quality instructional practice, collaboratively identify an area of growth with the teacher, and have the capacity to provide the feedback and support necessary for success that is sustained over time.  From our experience we know that this is difficult to achieve, but it is a commitment that we make to our teachers as we move into this new evaluation model.

1 comment:

Jonathan said...

How will the district be able to make all these observations and also potentially support the 16.5% of teachers identified?

As if administrators didn't have their hands full before...