SOUTH PORTLAND – One of the parables of our culture is “The Seven Blind Men and the Elephant,” in which each person senses and describes a part of the elephant — correctly! — but none describes the elephant.

There is a parallel between this parable and the series of articles describing student testing, teacher evaluation and the relationship between the two.

None of us is blind, but none of us describes the elephant. In essence, what is the problem? The issue seems to be poor student performance.

What is the solution? Assuming that there is an “elephant” out there, we need to pause and look at it. What I sense is a mix of issues knotted together, causing poor student performance. If this is valid, the relationship between teacher evaluation and student performance needs a different perspective.

Let me offer an historical perspective. At the turn of the 20th century, there was a focus on increasing the efficiency of business and the production of goods.

This brought about a movement labeled “Taylorism,” after its chief proponent, Frederick Winslow Taylor, in which efficiency experts with timers and clipboards and time-motion studies identified ways of making production lines more efficient.

It was effective for business. It was a data-driven philosophy. At the same time, educators were pressured to use some of the same techniques.

They tried it, and the results were miserable. This is documented in Raymond Callahan’s 1962 study, “Education and the Cult of Efficiency.”

He documents the alarming lengths to which school administrators went, particularly in the period from 1910 to 1930, in sacrificing educational goals to the demands of business procedures.

This, I propose, is the same business philosophy driving the current interest in measuring teachers based on the “product” — student performance as a measure of progress.

We are dancing around the elephant without recognizing that a business philosophy is at work; we are trying to engineer how to employ it without first examining whether it is applicable.

Let’s start with an observation — students are not products.

Today’s focus is on standardized testing and using the outcomes as a measure of teacher effectiveness.

Data-driven models have become more popular, and presumably provide more efficiency, with the introduction of computers and the capacity to store, retrieve and “mine” data.

But availability doesn’t mean applicability. Let me repeat — students are not products.

The educational process is a slow, developing, non-linear personal event, with uneven beginnings and uneven outcomes.

We all like to believe that all children are capable of being educated, which may well be true, but to accomplish this we would need to tend to each individual’s capabilities. Mass production with zero-defect standards doesn’t do this.

Business models don’t produce zero defects; they simply take flawed products off the line and don’t distribute them. Consider what that parable would mean for our students.

Also, competition between companies, or between sports teams both professional and amateur, may be important for our economic system, but for learning — and for a sense of one’s ability to learn — competition is corrosive.

My sense of the elephant only adds to the aggregate of all the other observations. Perhaps one day a clear description of the elephant will emerge. I’m proposing that the issue being presented is not the issue to address.

Rather, the focus needs to be a more comprehensive assessment of what contributes to poor student performance.

If an analogy must be used, consider this: In professional basketball, in which our current Secretary of Education Arne Duncan once participated, the player who takes the last shot in a close game, and either misses it for a loss or makes it for a win, is seen as the goat or the hero by the general public.

Those on the team, however, are aware of the cumulative errors (bad passes, missed shots, poor decisions, weak coaching strategies, etc.) that, in the aggregate, led to the close game.

All of that combined led to the situation that presented the player with the “win-or-lose” shot.

Let’s not evaluate the teacher — the one taking the last “shot” for the student — on the cumulative events that led to the student being in that classroom.

We need to examine the model driving the decision, not the decision.


– Special to the Press Herald

