
Sep 24, 2010

The information puzzle


This week, I spent quite a bit of time playing with information. I was finally able to edit the School’s site directly (it will take another couple of days to publish the updated version), we were able to launch the bare-bones site for NCATE and RIPA Institutional reports (it is called http://RICreport.org), and we had another go at the on-line student teaching application. I actually enjoy this kind of work immensely. Every time a simpler, more straightforward way of conveying information is found, it makes me happy. Where does it come from? I am not sure; perhaps a hobby, an inclination.
Sometimes I wonder if a dean should be spending his time cleaning up the School’s website. Not normally, not routinely. But at this point in my life here, it is extremely useful. Understanding information flows is understanding the organization. Understanding something is simply organizing one’s thoughts, telling a coherent story about it.
Here is an example: the NCATE and RIPA reports are both due in May. They have somewhat similar content, but very different structures. For example, NCATE wants to know about our technology resources in Standard 6, while RIPA wants it in Standard 2. We could, of course, write two separate reports, but the problem is that each has to come with hundreds of pieces of evidence. It becomes a logistical nightmare to collect and organize all of this stuff. However, we figured out that a website does not have to be linear: it allows the same document to be easily attached to two different outlines. Why is this important? Well, if you are working on the description of technology, we must wait until you are done to incorporate it into the report, and you would have to put it in two different places. Then, when we discover an error or an additional piece of information, we need to edit both places and make sure each still connects to the previous and subsequent text. A website, however, can be used by all the members of the team as a working instrument – many pages can be edited at the same time, and they retain their links.
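For the curious, here is the idea in miniature – a toy sketch with made-up names, not our actual site. Each outline stores only a reference to a shared document, so a correction made once shows up under both standards:

```python
# A toy sketch of a non-linear report structure (all names are made up).
documents = {
    "tech_resources": "Description of the School's technology resources...",
}

# Two different outlines reference the SAME document rather than copying it.
ncate_outline = {"Standard 6": ["tech_resources"]}
ripa_outline = {"Standard 2": ["tech_resources"]}

# One correction, made in one place, is now visible from both outlines.
documents["tech_resources"] += " [corrected]"

for name, outline in [("NCATE", ncate_outline), ("RIPA", ripa_outline)]:
    for standard, doc_ids in outline.items():
        for doc_id in doc_ids:
            print(name, standard, "->", documents[doc_id])
```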
Anyway, for me it is like a puzzle or chess – a somewhat abstract game of solving information flow problems. But in the meantime, I think I am starting to understand what we actually need to collect and how we should present the good work we do. I would not like to do it all the time – meeting with people, talking, and listening are still by far the more important and enjoyable parts of my work. But I like my puzzles, too.

Sep 17, 2010

Rainy mornings, worthy projects, and good stories

Autumn is not here yet, but you can smell it. The faint pungent aroma of wet leaves, still deciding whether to turn or not. The slow, lazy rain openly invites all procrastinators and homebodies to stay put, get a Netflix movie, and do nothing. It is a wonderful morning for me to break my usual frenetic pace and just think about things.
One of the issues I tried to tackle this week is that of our various partnerships, grants, and public service projects. Which ones should we support, and which should we not? And to what extent can we do it? All wondering comes from ignorance. Several requests for different kinds of support made me realize I have no method of deciding.
What if you found out your Dean had used the School’s money to support a particular charity; let’s just say www.iorphan.org, which I happen to like. Just cut a check from one of our accounts and sent it to them. Would that be OK? Of course not. I do not have faculty and administration consent, and there is nothing in our mission that would justify this kind of expenditure. Note, the project is undoubtedly worthy, and deserves support. But the intrinsic worth of a project is not enough.
OK, what if you found out we provide reassigned time for someone who offers free or deeply discounted classes to teachers of… let’s say Anthropology, in Rhode Island? This feels closer to what we do, and perhaps should be supported. But maybe not? The job of a Dean is really not that closely supervised, and I am not likely to be questioned on decisions like these. However, I always want to have a good story ready in case somebody asks.
So, let’s slice it. First, any kind of material support should be connected to our mission, which is, if you have forgotten, “to prepare education and human service professionals with the knowledge, skills, and dispositions to promote student learning and development.” Anthropology teachers pass the test, but Russian orphans do not.
Second, the needs of the community are immeasurable, but we have very limited resources. The public (represented by the Board of Governors) wants us to keep tuition low. If we teach for free, or offer deep discounts to one group of students, how is it fair to other groups of students? For example, say we run a graduate class for Anthropology teachers for $50 per credit, and charge other students $342 per credit. The latter are, in effect, subsidizing the former. But did we ask students if they would like to help out? Did we ask the taxpayers of Rhode Island if they would like to help out Anthropology teachers specifically, at the expense of, say, English teachers? No, we did not. Therefore, a project cannot drain resources from other programs, and it should at least pay for itself, no matter how worthy it is.
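To put a rough number on that cross-subsidy: the per-credit rates are the ones above, but the cohort size is a made-up assumption of mine, just for illustration.

```python
# Back-of-the-envelope cross-subsidy arithmetic (cohort size is hypothetical).
discounted_rate = 50    # $ per credit, charged to the Anthropology cohort
regular_rate = 342      # $ per credit, charged to everyone else
credits = 3             # a typical graduate course
cohort_size = 20        # hypothetical

per_student = (regular_rate - discounted_rate) * credits  # $876 per student
total = per_student * cohort_size                         # $17,520 per course
print(f"${per_student} per student, ${total} for the whole cohort")
```

Somebody covers that gap for every discounted course, and it is the full-price students and the taxpayers who were never asked.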
However, things get complicated when a subsidized project actually has direct benefits for our main programs. For example, if we manage to get the status of a Peace Corps Fellows site, it may help us recruit a completely new population of students, which will increase revenues and help everyone. Just the free advertisement of RIC on their website is probably worth good money.
Here is another example of paradoxical logic, from my previous institution. A colleague was asking for substantial reassigned time to edit a major national journal with a circulation of 40,000. How is this not a pet project? How does it benefit the rest of us? I argued that every time the journal is printed, 40,000 people will see the name of the institution on its cover page. This kind of publicity costs a lot, and we are getting a great deal: all of it for the few thousand dollars a year needed to replace him. Should everyone who edits a journal get the same perks? Of course not; the logic of equality does not apply here. A small journal with only a few dozen readers will not provide nearly enough exposure to justify the cost. It is also easier to edit.
That’s some of the thinking that goes into assessing all the worthy projects for material support.

Sep 10, 2010

Incentives and the Goldilocks Zone

Much of this week, I have been thinking and talking about an incentive system for off-campus programs. I was also reading several recent books on higher education that Educational Theory asked me to review. All of the books express concern over the commercialization of the nation’s colleges. The decline in public funding forces many universities into an endless pursuit of revenues, and may undermine their public purpose. It is all true, and there are many things to worry about. However, let’s look at a typical state college as a form of labor arrangement. It works reasonably well for traditional students who come on campus to get a degree. There is a well-defined distinction between instructors and a range of support services, from IT to the Bursar to the health center to the library. Each specializes in one function, and because we concentrate a large number of students on campus, the economies of scale make it all work.
This arrangement fails spectacularly when we try to go into the world of working professionals, such as teachers, school psychologists, or principals. They don’t want to come to campuses anymore, and expect educational services to be available either at or close to their workplaces, or on-line. They want education to fit into their very busy schedules, families, and commutes. These needs dictate cohort-based, hybrid or online, flexibly scheduled, but high-quality programming from an accredited, reputable institution. But to put together and see through a successful cohort, we need to send someone to another location to be a jack of all trades: a marketer, a recruiter, a cashier, a mobile library and bookstore representative, an academic advisor, and a registrar and financial aid officer. While many faculty members actually can do all of these things, it is entirely unclear why they would. A full-time faculty member is guaranteed a teaching load and a stable salary on campus; it is entirely unreasonable to ask people to increase their workload.
It takes a different economic model and a different system of compensation to get the off-campus behemoth moving. Many universities across the nation have realized this, and established cash-funded programs, financially distinct from state-funded ones. It goes something like this: a group of faculty believe there is a need for a graduate program at a specific location. They use their own social and professional networks, find out exactly what people want and need, and then create a cash-funded cohort. The institution decides whether the project is financially viable and academically rigorous (because, remember, our reputation is our most valuable asset). After that, the initiator(s) do most of the legwork recruiting students, helping them to register, to buy books, to use campus technology, etc. In exchange, the cohort coordinator and instructors are paid stipends. At the end of the program, whatever profit it generates is divided between the originating unit and the central administration.
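For concreteness, here is the shape of the arithmetic. Every number below is a hypothetical assumption of mine, not an actual figure, except the $342-per-credit rate borrowed from the previous post; the split ratio in particular is invented.

```python
# The cash-funded cohort model, back of the envelope (all numbers hypothetical).
students = 18
courses = 4
tuition_per_student = courses * 3 * 342      # four 3-credit courses at $342/credit
revenue = students * tuition_per_student     # $73,872

instructor_stipends = courses * 5000         # one stipend per course taught
coordinator_stipend = 6000                   # for the faculty member doing the legwork
overhead = 8000                              # marketing, materials, travel

profit = revenue - instructor_stipends - coordinator_stipend - overhead
unit_share = 0.6 * profit                    # split ratio is also hypothetical
central_share = 0.4 * profit
print(f"profit ${profit}: unit ${unit_share:.0f}, administration ${central_share:.0f}")
```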
The model works well, but it needs a careful balance. If the incentives are too strong, they may suck the life out of existing on-campus programs. Full-time faculty members become too preoccupied with cash-funded operations; they also tend to convert some viable on-campus programs into off-campus ones, just because the pay is better. If the cash-funded operations empty your campus, you end up wasting significant resources. It is unlikely to happen, because of the constant demand for the traditional undergraduate experience, but it may.
If the incentives are too weak, they do not generate the needed level of initiative and effort. If you are running out of space and capacity on campus, and do not grow off-campus, you are losing opportunities and hurting your institution. The cash-funded incentives need to sit in this Goldilocks zone – not too hot, and not too cold. The system also needs to be highly predictable: if you keep changing the rules every year, people will avoid taking risks.
Another inevitable side effect of any “capitalist” system is inequality: some units naturally have more opportunity to earn supplemental income than others. If you see a colleague next door buying laptops and cameras, and you have nothing but the bare paycheck, you start feeling unloved and forgotten. So the deal must have some way of sharing the riches, or it will collapse. Some honest conversations need to take place on what exactly one promises to do if one accepts a cash-funded program stipend. Those working exclusively on campus will then know exactly what they do not have to do, because of the campus support services. There are other nuances. For example, you need to make the cash-funded courses available as both in-load and overload; otherwise staffing flexibility is greatly reduced. To do that, you need a protocol for transferring money from cash-funded accounts back into state-funded ones. Other quirks and deformations are possible, and you can only do so much to anticipate them. And we chronically lack time to do anything in a measured way, with all precautions. To start something in the summer of 2011, we need to recruit students in November. To recruit students, you need a clearly defined program. To get to the program, you need an incentives policy in place. To get a policy, you need to talk with at least a dozen people, and more than once.

Sep 3, 2010

Assessing the assessments

Speaking forcefully to an audience with which one does not share a long history is dangerous. One subconsciously refers to one’s own experiences, and the layers of meaning associated with them. The audience refers to its collective experiences, and to the semiotic fields created by them. It is like carrying a conversation from one company to the next; you might be right in substance, but have an undesired effect. I would like to apologize to the Assessment Committee, the Director of Assessment, and all those involved in developing the School’s and programs’ assessment system, if I sounded dismissive of the work they have done so far or have planned for the future. That was not my intention at all. The work they have done so far is very impressive, and is certainly one of the better examples I have seen or heard about. That is why I am still very confident we will get through accreditation by NCATE and RIDE next year, although with some considerable effort. My intention was only to encourage all faculty members to take charge and ownership of their parts of the assessment system, to make it a priority to use the data for actual decision making, and to improve what seems too burdensome or ineffective. That is the difficult part – making all these instruments and data sheets actually work.

Most schools of education around the country are going through more or less the same journey. It started with NCATE’s new standards, developed some 15-20 years ago, which required institutions to build comprehensive assessment systems that rely on performance data. That was light-years ahead of the rest of higher education, and no one knew exactly what they wanted. NCATE made a huge mistake of requiring too much and being too specific (they are trying to fix it now, with varying degrees of success). As a consequence, most schools, especially large and complex ones, scrambled to produce some data – any data – to satisfy the expectations. Because there was very little incentive or tradition to collect and use data, many faculty treated it as a burden, as another hassle from the Dean’s office. No one had good technology to quickly aggregate and return data back to faculty. As a result, a combination of not-so-good data and late or difficult-to-read data reports emerged. By the quality of data I mean simply how informative it is.

If I were given the task of developing a student teaching evaluation instrument that must cover a number of SPA standards, plus a good number of state standards, I would just make a long list of indicators and check marks, with a rubric spelling out each indicator at 3-5 different levels. To begin with, those standards are not always well written. Then, I would not be paid for doing this, and no peer review would be conducted. I would produce something that looks good and covers a lot of ground, but… let’s just say, is not very useful. In the end, I would get “flat” data – every student is OK or excellent, on every indicator. We also tend to mingle the function of passing students for the class with the function of providing them with meaningful feedback: the former is high stakes, and discourages honesty; the latter should be kept private, and merciless. Formal evaluation and coaching do not mix well. OK, so I get this report, with boring data I myself produced and inputted, and I lose faith in the whole enterprise of assessment, so I become even less honest and less careful providing the data next time. That creates a vicious cycle I like to call the compliance disease. It is not because someone did a poor job; we all got it, because of the institutional constraints we operate in.
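A toy illustration of why “flat” data carries no information – the scores below are made up, but the point stands: when everyone gets the same rating, the variance is zero and there is nothing to learn from the report.

```python
# "Flat" vs. honest assessment data (all scores are hypothetical).
from statistics import pvariance

flat_scores = [4, 4, 4, 4, 4, 4]     # every student rated "excellent"
honest_scores = [2, 4, 3, 5, 3, 4]   # ratings that actually discriminate

print(pvariance(flat_scores))    # 0.0  -> tells you nothing about anybody
print(pvariance(honest_scores))  # 0.92 -> now you can see who needs help
```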

Most thoughtful assessment folks across the country understand the problem, to varying degrees. However, they lack explicit mechanisms for fixing it. For one, there is only so much you can push on faculty before they rebel. You just convinced everyone to collect and report data, and now what?... Come again?... You want us to go back and revise all the instruments one more time? But it is imperative that faculty own the assessments. It is very hard for an assessment coordinator to openly challenge instruments designed by faculty, because the authority is supposed to flow from faculty members, through elected members of the Assessment Committee, to the assessment director and to the dean. But authority is a funny thing – everyone says they want more of it, but no one really wants to have it. Many assessment coordinators recognized the symptoms a long time ago, and are now moving to the next generation of assessment systems. My aim was really to help Susan, the Assessment Committee, and the program coordinators in what they are already doing, not to hinder their important work. Again, my apologies if at the meeting I did not express my full confidence in them.

What would the next generation of assessment look like? It will have fewer, simpler, more practical but more robust instruments; very selective but very focused collection of data; efficient technological platforms (such as Chalk and Wire) for instant input, analysis, and dissemination of data; and a firmly institutionalized process of using data to improve instruction. But most importantly, it will require a change in the culture of assessment. The new culture will have faculty as active participants, fully engaged in the constant re-design of instruments, not passively taking orders from the Dean’s office. The last thing we want is compliance for the sake of compliance (we also do not encourage rebellion for rebellion’s sake). What we want is engaged critical minds that share the purpose, and are in dialogue about the means. We need to get this assessment thing right, because there is simply no other way to prove our worth to society. We need to be confident that our measures make sense to us and to our students. Then they will make sense to any accrediting agency.