
Oct 27, 2011

Dear Regents…


This is not an attempt to influence your vote, but an attempt to influence your deliberative process.

You are asked to regulate a professional community with an internal disagreement. One group of well-meaning and well-qualified people believes in A, and another believes in B. Both are committed and passionate. Whom do you trust to make a decision? You do not have the luxury of time to examine the conflicting claims on their own merits. For example, RIDE believes that the certification system should no longer require professional development; it will be handled better by the new evaluation system. Others argue that this would be an unprecedented move, in sharp contrast with other states’ policies. Another example: a professional organization (RIMLE) believes teachers working in middle schools should be required to be certified in this area. RIDE staff believes these should be local hiring decisions, rather than centralized certification rules. How should such disputes be adjudicated? There are two ways, both developed within our democratic tradition.
  1. The first is to ask both sides to tell their own stories, and decide which is more convincing. RIMLE, for example, would have told you something like this: we are concerned that any middle level job opening can be claimed by any high school teacher with seniority, and principals may have little say. RIDE has another story, also compelling: one district has recently decided to move its sixth grade into the middle school, and suddenly all of its sixth-grade teachers become unqualified to teach the same kids in a different building. The problem is that you heard the latter story, but not the former. RIMLE members cannot directly engage with the Board, and their input is actually summarized and responded to by RIDE, a party to the dispute. This is a conflict of interest.
  2. The second way is to invite a third party: an expert who knows as much as both of these professional groups, but has no stake in the outcome of the debate. Courts do this all the time when judges and juries lack the specialized training to weigh the evidence. Invite someone familiar with educational policy research from a neighboring state's university or a research center to testify. It is faster, and although you may not learn as much detail, at least you have another party checking the facts and the conflicting claims.
The same approaches can be taken with the evaluation/certification debate. Both sides are equally compelling; both can commandeer research evidence, arguments, anecdotes, and metaphors. We can (and did) debate those for hours on end, going into more and more professional nuances, imagining more and more intended and unintended consequences. Each side has its biases and its interests in the outcomes of the debate. This is why a non-professional citizen board like yours is so important; the interests of the public should be protected, and no profession should have a monopoly on running its own affairs. But protecting the public interest also requires weighing in on disagreements among the professionals.

In the end, you would have to agree with one side or the other. But in a deliberative democracy, the process is more important than the outcome. Can you honestly tell what exactly RIMLE has against the change? Do you know why teachers object? Are you sure you understand the Higher Education community’s position before deciding it is wrong? If you can and do, cast your vote. If not, perhaps another look is warranted.

Context

For those unfamiliar with the debate:

The Rhode Island Department of Education has developed a proposal to revise the State’s educator certification policy. It is due to be voted on by the Board of Regents on November 3, 2011. Many of the provisions were supported by various professional groups, but some were also strongly objected to. See, for example, the RIC Feinstein School of Education letter, and the Resolution of the Certification Policy Advisory Board. There is also the RIDE-compiled Summary of the public comments and recommendations, which, in my view, does not do justice to the opposing arguments. The three public hearings were recorded. The controversy is mainly around three items:

  • Teachers object to the immediate link between the new teacher evaluation system and the certification policy. Union leaders and many teachers actually support exploring the idea, but feel that the evaluation system (which makes student achievement an important part of teacher evaluation) is just too new: it has not been piloted, and we don’t know if it can generate reliable data. RIDE responds that the actual decisions are a few years away, and if the evaluation data is no good, they would be the first to pull the plug on using it for certification decisions. The issue is: should the safety mechanism be statutory or administratively decided?
  • Institutions of Higher Education object to the removal of professional development requirements from the certification policy. They believe de-valuing graduate education removes an important teacher quality assurance mechanism and sends the wrong message about the value of educational credentials in general. The RIDE team believes teacher PD should be embedded in curriculum work, and is best determined locally, by principals and districts. The issue is: should teachers keep going to school, or can their professional growth be self-directed and employer-directed?
  • The proposal keeps the Middle Level Certification area, but no longer requires teachers working in middle schools to have it. Secondary teachers will be able to teach grades 7-12, and Elementary teachers grades 1-6, anywhere, in any school setting. The issue is whether teacher qualifications are only age-specific, or also setting-specific.

Oct 23, 2011

What’s next?


It is hard for me to get excited about educational policy talks. A good scholarly paper makes me happy; a great story about teaching moves me. But a talk on educational policy… Let’s just say these things usually bring out the skeptic in me. In fact, I can’t remember the last time one excited me, until Thursday night, that is, when Robert Balfanz gave a talk at the RI Foundation. I thought: these are all things I always knew to be true, but just could not say. First, he deals with education in terms of dropout prevention. It is a much better lens than international tests. Dropping out of school is a real and often tragic event, coinciding with giving up hope of succeeding. In contrast, the shrill calls for outcompeting the world through education strike the wrong note, not only because they are untrue, but because they are disconnected from real lives.

Balfanz then goes into a simple line of reasoning: future dropouts can be easily identified by sixth grade, and not by test scores alone (one must consider absenteeism and behavioral problems). Different kids may have very different reasons for falling behind, and they need different sets of interventions. There should be a better allocation of resources: some schools have much more need than others, and therefore should receive more resources. Schools alone, in the narrow sense, cannot help these kids. While kids must all have a good lesson in the morning, there should be a second shift of adults offering rich after-school activities, support, specialized interventions, and just the sense of community and belonging. All of these efforts should not only be resourced (for example, by shifting resources from justice and prison expenses), but also targeted and coordinated. Teachers should be closely involved with the “second shift” people. Balfanz likes to use the term “engineered.” Another metaphor he uses: education requires a coordination effort comparable to putting together a Broadway show.
I noticed that the usual divisions were absent in the room. The approach can bring together people from different camps. Teachers have always rightfully argued that schools and teachers alone cannot undo the effects of concentrated poverty, no matter how hard they try. Many educational researchers have been arguing that schools as institutions of pure learning cannot work, and need to be augmented and diversified to improve. The reformers liked that there are still measurements, accountability, and clear numerical targets (the dropout rate) with direct economic significance. The after-school crowd, of course, loved this too, for obvious reasons. This is one unique case where the idea may actually play well politically, for everyone can be (and should be) included. Balfanz’s strength is in the systems approach. He is basically suggesting we may have enough resources to significantly reduce the dropout rate; all we need to do is organize and allocate them smartly.
National education policy since Reagan has been doing very few things, each somewhat promising, but also mind-bogglingly disconnected. The reform has been dominated by non-professionals, who believe in miracles and fail to see nuances. Every time one of us in the profession tries to critique another silver bullet, they take it as resistance to change. So they will not listen, and keep making the same mistakes over and over again. Their passionate, well-intended, but unsophisticated thinking did little to address education’s problems. A quarter century’s worth of reforms has very little to show for the money. The thinking goes like this: let’s just do more of the same, do it a whole lot harder, and for longer, and it should work. If it does not work, overcome the sabotage in the profession. This created a whole new set of political divisions that did not exist before, by asking teachers to perform miracles and then blaming them for failure to deliver.
A platform that would reunite the fractious groups of educators is possible, and it can be developed along the lines of Balfanz’s thinking. Let’s keep all the existing reform efforts. I wish some could be scaled down a little, taking less time and attention, but let’s keep them all and find them their proper places. Yes, we need a set of national standards, and better testing. Yes, we need coherent teacher evaluation systems, better induction practices, and experimental schools. But we also need to bring a whole lot of additional resources into struggling schools – social workers, community partners – in such a way that they don’t fall over each other. We need to elevate our diagnostics (along the lines suggested by RTI) to a more sophisticated, and yet simpler, level. Let’s measure not just tests, grades, and learning outcomes, but also engagement levels: how attached kids are to schools and to the adult world. Are they fed? Healthy? Do they have a stable home? We need to think of the whole day, from morning to night, not just about lessons from 9 to 2.
The big task is system building. It can be done. For example, our friends at PASA have figured out how to bring dozens of after-school service providers into one schedule to serve Providence kids. Central Falls is experimenting with the Restorative Justice approach, which integrates social work with education. They are also trying to connect public schools with charter schools. There are many other examples, recent and local, but also historical and from across the world. It is important to realize that integration work is a special kind of work. Bringing together community partners, schools, and social services is no small task. Someone has to develop a model for integrative, logistical services, making use of contemporary information technologies.
If Arne Duncan were an educator, that’s what he would fund. But I don’t want to wait for another wave of ill-conceived reforms to pass. I think we should just do it in Rhode Island.

Oct 14, 2011

The capacity for change


We had an interesting discussion today at the TEIL meeting. Why is higher education so slow to change? What we realized is that our best side is also our worst side. As an industry, we have one of the most educated, most dynamic workforces. Faculty are trained to be critical, thoughtful, inquiring. This very advantage makes many changes on campuses almost impossible. The minute one small group comes up with an idea, a suggestion, a plan, other groups will immediately start investigating and critiquing it. They will inevitably find flaws in it, and demand further revision. Yet once revised, the proposal becomes the subject of scrutiny by other groups, and other flaws are immediately found. Eventually, the proposal either dies or is changed to be very similar to the status quo. Higher education is a unique system where almost everyone has veto power. Many players can say no, but almost no one can produce a definite yes. So the odds are against any potential change. This is a matter of probability determined by cultural and organizational conditions, not a result of any special conservatism.

The other extreme is passivity, where people are disengaged and let the administration, or a group of faculty, do whatever they want. This is not a good option either. Change can happen quickly, but some of it is not good (the ideas were not vetted), and there is little buy-in and support from faculty. Changes like these are easy to make superficially, but they fall victim to the slow sabotage of those who consented but did not engage.

We eventually started to talk about trust, and how it is an essential condition for change. To a certain degree, we need to operate on trust, and suspend our critical judgment. For example, if we trust a committee to develop something, and then find its product unconvincing, we should make an attempt to accept it, unless it is completely unacceptable. We simply cannot develop everything by consensus. Consensus is great for fundamental beliefs and strategic priorities, but fairly counterproductive for developing specific things. Writing by committee only works when people become exhausted, and ultimately disengaged (which is the error of the second kind). All campuses I have seen fluctuate between the two extremes of jaded passivity (usually about big decisions) and spirited struggle (usually about the littlest things).

In most cases, our thinking should go like this: OK, you guys were asked to do something, and I was among those who asked you. I was also asked for input, but, sorry, did not have time to provide much. Now you have produced something that, frankly, is not that great. I would do a much better job, no doubt. But hey, I was not on that committee; I did not hear all the debate and the compromises. I am talented, but busy. Well, OK, perhaps my version would be just as vulnerable as yours. Can I live with what you produced? Is this against my core professional and ethical beliefs? Not really. Is there ill intent behind it? Probably not, just benign incompetence. I can’t do everything myself, so let’s try it. Next time, I‘ll get on that committee and get things right.

Oct 7, 2011

The human factor

It is a gorgeous Fall Friday, one of those days that can put one’s senses in the state of hyper-alertness. Certain smells, shades, and views bring out misfiled, but never quite discarded memories. This is all I want to think and write about.

However, I am still at work, so here are my five cents on the new teacher evaluation system that RIDE is implementing this year. None of this is news to them; I have had many opportunities to share my thoughts with the RIDE team members responsible for this impressive project. I am writing as a concerned friend, not as a disengaged critic. The system has good potential, and I very much wish it to succeed.

The main idea is to use a value-added model to evaluate teacher effectiveness. In other words, if your students show growth, you must be an effective teacher; if they do not, you are not effective, whatever you say and whatever your credentials are. Intuitively, this makes a lot of sense to policy makers and to the public at large. And RIDE’s statistics experts developed a very clever model that measures just the growth, not the absolute test scores, against the average growth rates in the state. There are also multiple safety checks in place so that teachers are not dismissed by accident. First, the growth model is only about one third of the evaluation; the rest is observations and professionalism. Second, you’d need to show several years of low performance to actually be dismissed. Third, you will be offered help along the way.
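To make the growth idea concrete, here is a toy sketch of what "measure growth, not absolute scores, against the state average" means. All numbers and the naive averaging are hypothetical illustrations of mine; RIDE's actual model is a far more sophisticated statistical procedure.

```python
# Toy sketch of a growth-based ("value-added"-style) measure.
# Hypothetical data and naive averaging, for illustration only.

def growth_index(score_pairs, state_avg_growth):
    """Average per-student growth, relative to the state average growth."""
    growths = [post - pre for pre, post in score_pairs]
    avg_growth = sum(growths) / len(growths)
    return avg_growth - state_avg_growth  # > 0 means above-average growth

# One classroom: (fall score, spring score) for each student
classroom = [(40, 52), (55, 63), (70, 74), (30, 45)]
print(growth_index(classroom, state_avg_growth=8.0))  # → 1.75
```

The point of comparing growth, rather than raw scores, is that a teacher whose students start low can still come out above zero, as long as they improve faster than the state average.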

There are still serious scholarly concerns about how the model is going to behave in a large-scale trial. Most of them have to do with the stability of the measurements. If you were excellent one year but poor the next, the measure is not likely to be accurate. This happens more often than people expect: an instrument may be tested to be valid, but after its use is scaled up, or after the conditions of use change (say, from clinical to field application), it loses its reliability and validity. The non-measured, external influences may become too strong, your sample selection becomes less random (more biased), and its size turns out to be too small. Will it happen in this case? We don’t know yet. The RIDE team has run some older data through the model, and it seems to check out. But no one can say it will be fine once the data is collected in the context of a high-stakes system. The math in the model is not really a problem. (Well, it may disadvantage teachers who work with gifted students – those tend to score very high on any test to begin with, so their growth may not be as impressive. It may also reflect poorly on teachers who work with students whose scores are so low that their growth is invisible on available instruments.)

Once the evaluation system is established, people will start manipulating it, consciously or unconsciously. That pull may or may not be strong enough to undermine the validity of the central measure, but we simply cannot tell in advance. It is very hard to predict how the pressures of the new system will affect teachers’ and principals’ behavior. For example, if I teach non-NECAP subjects and grades, I get to establish my own learning objectives and measure their achievement with an instrument I construct. There are very good guidelines on learning objectives, and they could be mastered, no doubt. But it takes years of trying to develop a good sense of what’s achievable and to construct a good instrument to measure growth, and people may set learning objectives too high or too low. Every incentive, though, is to set them too low.

There is a comprehensive lesson observation and teacher evaluation tool developed on Charlotte Danielson’s framework. RIDE estimates a principal will spend 10 hours a year evaluating each teacher. I think this is an underestimate, because the learning curve needs to be factored in. Coventry High School has 172 full-time teachers, and Frank D. Spaziano Annex Elementary School has 8; the average is 42. Even most optimistically, that adds up to 420 hours, or 56 full days (assuming a 7.5-hour workday), or 11 full weeks. One third of the entire school year is gone from the principal’s time budget, if she or he did it alone in an average school.
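The time-budget arithmetic above can be checked directly. The 10-hours-per-teacher figure is RIDE's estimate; the 7.5-hour workday comes from the post, and the 36-week school year is my rough assumption for the "one third of the year" claim.

```python
# Reproducing the principal's time-budget arithmetic for an average school.
hours_per_teacher = 10   # RIDE's per-teacher estimate, hours per year
teachers = 42            # average school size cited in the post
workday = 7.5            # assumed workday length, in hours
year_weeks = 36          # assumed instructional year, in weeks

total_hours = hours_per_teacher * teachers   # 420 hours
full_days = total_hours / workday            # 56 full days
full_weeks = full_days / 5                   # 11.2 weeks
share_of_year = full_weeks / year_weeks      # about 0.31, i.e. one third

print(total_hours, full_days, full_weeks, round(share_of_year, 2))
```

Even under these optimistic assumptions, evaluation alone consumes roughly a third of the principal's working year.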

And then again, enter the human factor. Most observation criteria are, by necessity, vague; the time is very limited; and the stakes are fairly high. From my experience, this is the recipe for the “regression to the excellent” phenomenon, which we are struggling with in teacher preparation. If you are a principal checking 80 items within 50 minutes, and you know it actually matters, you will be tempted to evaluate everyone high, just to be safe. Then you get flat, uninteresting data in the end, where everyone is above average. Looking at that data will reinforce your low buy-in. That is the real danger. Once people lose faith in the system (even when they are at fault), their next cycle of observations becomes even less accurate. Why should I care if this does not tell me anything useful anyway? You can probably tell I am speaking from experience here. What begins as a big scare ends up as the biggest joke.

Now, I don’t want any of these things to happen, and I hope they won’t. This is not a call to abandon or dismantle the new evaluation system. We should give it a very serious try, and work earnestly on using what we all learn. Expect years of finding new unintended consequences, not despairing, but fixing them all, one by one.

I do, however, believe that the timelines set by the Federal Government are utterly unrealistic. The State’s educators, led by the “RIDE rangers,” no matter how competent and hard-working, simply cannot deliver a functioning evaluation system within a couple of years. It would also be absolutely unrealistic to count on that system working properly within the next five years. So when we pin other items on the reform agenda to this unrealistic hope, we only increase the uncertainty. For example, moving the professional development requirements from certification into the new evaluation system is hugely risky. We are dismantling one quality control mechanism on the pure hope that the new one will be better. Yes, the old system was not that great, but at least we know it worked somewhat. Remember, American education has been slowly improving over the last thirty years by almost every measure available.

There is a huge distance between a promising idea and a working public policy, with all its underlying processes and procedures. The new evaluation system elevates the level of complexity tenfold, because of the sophisticated information technology requirements and the number of decisions that need to be made and recorded. One cannot expect a Great Leap Forward. Didn’t we try this before? Goals 2000, anyone? We all remember what happened: the financial collapse, the stimulus money, the mad rush to spend it. Mistakes have been made, and they must be corrected. The sense of urgency is great, but not when it can actually make things worse rather than better.

This is not really a message to RIDE – they cannot do anything about the timelines in the Race to the Top grant, onto which the entire State (with the exception of higher education) has happily signed. The feds screwed up (which never happened before, right?). We should try to persuade our Congressional delegation to work with the Federal Department of Education to allow for more flexibility.

I worry that every new failed reform undermines our collective ability to hope, to learn, and to trust each other. And we need all three of those things to move education forward. Hope, learning, and trust are what we need the most. It is easy to get cynical and just wait for all this to pass, for stuff to hit the fan, etc., etc. That is not much of an option, really. Educators in this State have already invested an ocean of energy into the reform. Let’s just do it right this time.