Sep 3, 2010

Assessing the assessments

Speaking forcefully to an audience with which one does not share a long history is dangerous. One subconsciously refers to one's own experiences, and the layers of meaning associated with them. The audience refers to its collective experiences, and to the semiotic fields created by them. It is like carrying a conversation from one company to the next; you might be right in substance, but have an undesired effect. I would like to apologize to the Assessment Committee, the Director of Assessment, and all those involved in developing the School's and programs' assessment system, if I sounded dismissive of the work they have done so far or have planned for the future. That was not my intention at all. The work they have done so far is very impressive, and is certainly one of the better examples I have seen or heard about. That is why I am still very confident we will get through accreditation by NCATE and RIDE next year, although with some considerable effort. My intention was only to encourage all faculty members to take charge and ownership of their parts of the assessment system, to make it a priority to use the data for actual decision making, and to improve whatever seems too burdensome or ineffective. That is the difficult part – making all these instruments and data sheets actually work.

Most schools of education around the country are going through more or less the same journey. It started with NCATE's new standards, developed some 15-20 years ago, which required institutions to build comprehensive assessment systems relying on performance data. That was light-years ahead of the rest of higher education, and no one knew exactly what they wanted. NCATE made a huge mistake in requiring too much and being too specific (they are trying to fix it now, with varying degrees of success). As a consequence, most schools, especially large and complex ones, scrambled to produce some data – any data – to satisfy the expectations. Because there was very little incentive or tradition to collect and use data, many faculty treated it as a burden, as another hassle from the Dean's office. No one had good technology to quickly aggregate data and return it to faculty. The result was a combination of not-so-good data quality and late or difficult-to-read data reports. By the quality of data I mean simply how informative it is.

If I were given the task of developing a student teaching evaluation instrument that must cover a number of SPA standards, plus a good number of state standards, I would just make a long list of indicators and check marks, with a rubric spelling out each indicator at 3-5 different levels. To begin with, those standards are not always well-written. Then, I would not be paid for doing this, and no peer review would be conducted. I would produce something that looks good and covers a lot of ground, but… let's just say, is not very useful. In the end, I would get "flat" data – every student is OK or excellent, on every indicator. We also tend to mingle the function of passing students for the class with the function of providing them with meaningful feedback: the former is high stakes, and discourages honesty; the latter should be kept private, and merciless. Formal evaluation and coaching do not mix well. OK, so I get back this report, with boring data I myself produced and inputted, and I lose faith in the whole enterprise of assessment, so I tend to be even less honest and less careful in providing the data next time. That creates a vicious cycle I like to call the compliance disease. It is not because someone did a poor job; we all got it, because of the institutional constraints we operate in.

Most thoughtful assessment folks across the country understand the problem, to varying degrees. However, they lack explicit mechanisms for fixing it. For one, there is only so much you can push on faculty before they rebel. You just convinced everyone to collect and report data, and now what?... Come again?... You want us to go back and revise all the instruments one more time? But it is imperative that faculty own the assessments. It is very hard for an assessment coordinator to openly challenge instruments designed by faculty, because authority is supposed to flow from faculty members through elected members of the Assessment Committee, to the assessment director, and to the dean. But authority is a funny thing – everyone says they want more of it, but no one really wants to have it. Many assessment coordinators recognized the symptoms a long time ago, and are now moving to the next generation of assessment systems. My aim was really to help Susan, the Assessment Committee, and the program coordinators in what they are already doing, not to hinder their important work. Again, my apologies if at the meeting I did not express my full confidence in them.

What would the next generation of assessment look like? It will have fewer, simpler, more practical yet more robust instruments; very selective but very focused collection of data; efficient technological platforms (such as Chalk and Wire) for instant input, analysis, and dissemination of data; and a firmly institutionalized process of using data to improve instruction. But most importantly, it will require a change in the culture of assessment. In the new culture, faculty will be active participants, fully engaged in the constant re-design of instruments, not passively taking orders from the Dean's office. The last thing we want is compliance for the sake of compliance (we also do not encourage rebellion for rebellion's sake). What we want is engaged critical minds that share the purpose, and are in dialogue about the means. We need to get this assessment thing right, because there is simply no other way to prove our worth to society. We need to be confident that our measures make sense to us and to our students. Then they will make sense to any accrediting agency.
