Over-testing?

Just a quick observation in response to a recent post on another education blog. Over the past few years, there has been a continuous focus on the “over-testing” of students, from curriculum-based assessments such as DIBELS to, of course, full-blown state tests that “count.” A recent blogger felt that, in her experience, the education system had crossed the line into “over-testing.”

My response is far from novel, but I believe it’s an important one: if “over-testing” has indeed occurred, it’s quite easy to pin the blame on all testing. That is, because too many tests have been given, every single one is greeted with spite and animosity. Yet another one. The recent emphasis on using data in education has even come under fire because, at times, it’s misused.

If you can’t already see my point, here it is: data is GOOD. Assessments are GOOD. Sure, too many tests are bad, and misusing data doesn’t help, but when we attack assessments and data, let’s stay focused on those specific problems. Instead, we’re seeing more and more posts condemning it all – tests, assessments, etc.

Question: If we add another test, will you view it as inevitably bad? Is there no chance it could be effective? If we integrate data into our educational framework in a new way, is it bound to be a burden, or does it stand a chance of helping?

It’s easy to get frustrated with “too much” and start levying blame on each individual component, rather than remembering that the problem isn’t every particular assessment measure, but a flaw in the overall system in which those individual components are implemented.

Written by

Bobby Caples graduated from USF and eventually went on to expand the concept of community-based behavior support in other settings, including YouthBASE (Greenville, SC), the organization he founded. The concept of YouthBASE was much like that of his thesis, only on a larger scale: take ideas and strategies that had found success in fields beyond after-school programs, and import them.