Carolyn, our NCATE Queen, and I have been dancing with and around data a lot. Our dependable work-study students have been diligently punching in numbers; our smart graduate assistants have played with spreadsheets and produced nice tables. Program reports are due soon, so we are taking a close look at what information we have gathered, and which of it makes any sense.
We have learned several things so far. One is that when you set up a data collection process, you don’t really know whether the data will be useful in the end. You also never realize what will be missing. Data aggregated looks different than a single item or a few items. Data can look really boring when there is no variation and everyone is proficient. Data can look weak because it does not prove what it is supposed to prove. We also realized that data collection must be systematic from the get-go: we actually collect too much data. Each individual sheet or form was added sometime in the past for what seemed like a good reason, but now many serve no purpose, or are never used for anything. When you have too much data, you end up spending more time digging out what is somewhat useful from what is obsolete. So data collection should be very limited, very focused, and have some validity. Not just what statisticians like to call construct validity, but what I would like to call gut-feeling validity: can we actually believe it measures what we say it measures? Can we stand by it?
In an institution of our size, bureaucratic procedures for data collection are crucial. Someone has to visualize the journey a piece of paper makes, and find the critical point where we can get a copy of it and enter it into a database. A lot can go wrong here: an instructor may forget to turn in his or her sheets; a staff person may be new and not realize that a certain piece of paper needs to be collected, or may not know what it looks like. A paper may be filed improperly, or not filed at all; then the information may never be entered into the database, and we have to pull the paper out of the files and enter it ourselves. Time also plays tricks on us: “I believe I turned it in to A,” says B about an event that happened many months ago. “I don’t remember receiving anything from B,” says A. Both suspect C might have the stuff, but C is no longer working with us. C says he turned everything over to D, who is also gone and out of reach, so I go into D’s office, where the stuff might be, but find nothing. End of search. Now this may look like a lot of incompetence, but it is not. Data collection is a complex process, highly vulnerable to error and to organizational change. It easily disintegrates under the pressures of time, large volume, and lack of strong motivation. Data needs evolve constantly because of changes in various laws, program revisions, turnover of instructors, administrators, and staff, and changes in technology.
However, the most important reason for our difficulties with data is that colleges have not yet learned to deal with accountability data. Of course, teacher education is at the forefront of the accountability movement. Most of our A&S colleagues are well behind us, and may have no idea at all about any of this; most are taking their baby steps in learning to dance this dance. However, even for NCATE-accredited institutions like ours, the data collection challenge is still relatively new. Institutions run on a different scale of time: what is a long time for an individual may be just a blink in institutional time. While individuals can learn things quickly and remember what they have done, institutional capacities and institutional memories are very different – not as quick, not as reliable, and heavily dependent on writing things down. Having someone highly competent around does not necessarily solve the organizational problem.
In the end, a lot of the data comes out a bit unconvincing. I treat it as a learning experience: I certainly learned a lot about dancing with data in this NCATE cycle, and many of my colleagues did the same. My worry is how to make the institutional memory and skills stronger. So, OK, we are starting fresh in the coming academic year. We need not only to revise the list of data items we collect and revise our instruments; we need not only to develop the logistics for collecting and analyzing them, but also to somehow make sure this process is sturdy enough to withstand changes. When we have new faculty, new secretaries, new work-study students, etc., how will they know what to do with data and why we’re doing it? Next time we change something in information collection, how will that information spread? Who will make sure the little pieces of data come together? How do we make this process less time-consuming and therefore less expensive? And most importantly, how on Earth do we collect only meaningful data, and stop collecting crap, WITHOUT failing our next NCATE review?
I am fairly confident we will pass most of this cycle, partly because Carolyn and others did a great job setting data collection in motion before I ever got here, and the process of actually writing the reports is well organized. I am also confident because NCATE has shown appreciation for the challenges of the institutional learning curve, and has not been indifferent to the issues specific to large units. So this is not grade anxiety, but thinking about how to convert this whole accountability dance into something we can actually enjoy and look good doing.