Oct 27, 2019

False kindness and kicking the can down the road

Academic admissions is never a perfect process. Sometimes we realize a student is not going to make it through the program. S/he may not have the right attitudes or character, may have academic deficiencies too large to overcome, or some combination of these. However, we tend to support our students, and it gives us no pleasure to expel anyone. So we pass someone barely, knowing deep down that the student is not going to succeed, yet hoping against the evidence that s/he may benefit from another chance. Such indecisiveness simply kicks the can down the road, letting someone else deal with the problem.

Down the road, things are not going to get easier. A student who stays in a program long enough gets the impression that s/he is doing just fine and can succeed. S/he invests a lot of money in a particular career and becomes less open to other options. Doing nothing about an unfit student is ethically problematic: it amounts to making false promises and charging unnecessary tuition. Part of it is the American cultural belief, irrational as it is, that anything is possible if one tries hard enough. This is why it is so hard to tell someone, "This profession may not be good for you." But if we are not honest about it, who will be?

I am not talking about cases where the outcome is genuinely uncertain. Many students do change, and many improve dramatically with time. Some can really surprise us, and we should leave plenty of room for trial and error. In those cases, one has to work as hard as possible to move such a student along, while warning colleagues further along in the program. Yet we prepare educators. Someone can be an OK engineer or research assistant, but not a good teacher, psychologist, or school principal. The test is simple: would you let your own child or grandchild have this person as a teacher or a counselor? If the answer is no, the ethical obligation is clear. Other people's children deserve the same as yours. Our primary ethical responsibility is not to our students, but to their students. I suppose it is the same in other professions as well. If you train pilots, their future passengers are more important than one student's life dream.

Sometimes the fear of making a mistake can be paralyzing. However, it is impossible for just one faculty member to dismiss a student from a program. Students always have a right to due process. There will be committees, appeals, several layers of review. We use collective judgement to protect students and each other from hasty, biased decisions. The collective wisdom of the institution is greater than that of any of us individually. However, to engage it, someone has to raise an alarm, and not kick the can down the road.

Oct 20, 2019

No place for democracy in standard development

At a recent state-wide Deans of Education meeting, I asked why our state’s standards for teacher preparation (TPEs) are so long. For example, one proposed set of standards has 93 items on it, which makes any meaningful compliance impossible. One of the panelists responded, “It is because a democratic process is used to develop the standards.” She added that the elements are mere guidelines, and that of course institutions are not asked to demonstrate meeting every element of the standard. But that is exactly what we are required to do; the panelist just could not imagine such an absurdity was possible. A less charitable colleague described the process of standard development as multiple special interest groups lobbying to have their pet items included in the standards.

No matter what you call it, there is a problem with documents developed through broadly based participatory input. Just remember when you were part of a group that brainstormed something. Everyone in the group needs some recognition. When you come up with an idea, you want it added to the list; otherwise you feel worthless and rejected by the group. The group has an interest in maintaining peace and cohesion, so it is likely to accept your idea even if it is marginal or just weird. That is how we end up with laundry lists of standard elements that are impossible to use in real life.

It is important to solicit input from broader constituencies. However, any brainstorming should always be followed by a critical phase, where a smaller group applies a critical eye to the list of generated ideas. Each item has to be checked against the purposes of the document. For example, can programs actually, credibly show that they are meeting this specific element? It would also be good to check whether a requirement has any basis in research; standards should be evidence-based and derive from research. For example, the proposed set includes “diverse learning styles,” a theory that was debunked more than ten years ago.

Finally, California regulators completely ignore item response theory, which is, more or less, the essence of contemporary psychometrics. Here is how the GRE math test works now: if you can answer a calculus question, you will not be asked to prove that you know long division or fractions. Statistically speaking, people who have more advanced skills are very likely to also have the lower-level skills in the same field. Deborah Ball had a somewhat similar insight when she came up with the idea of “high-leverage practices.” For example, if a teacher can show that she or he can adjust instruction, we can safely assume that the teacher is capable of formative assessment. Otherwise, how would she know how to adjust? Extending this logic, if a teacher can “Apply knowledge of the range and characteristics of typical and atypical child development [...] to help inform both short-term and long-term planning and learning experiences for all children,” s/he should be able to “Differentiate characteristics of typical and atypical child development.” However, the standards check for both. There is no way teaching performance can consist of 93 different scales. Some of the items must be measuring the same constructs, right? Some of the elements can be folded into others.
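The implication logic is simple enough to sketch in a few lines of code. This is purely illustrative: the skill names and the dependency map below are made up for the example, not taken from any actual standard or test.

```python
# Hypothetical dependency map: demonstrating the key skill implies
# the prerequisite skills listed under it.
IMPLIES = {
    "adjust_instruction": {"formative_assessment"},
    "apply_atypical_development_knowledge": {"differentiate_typical_atypical"},
    "calculus": {"fractions", "long_division"},
}

def inferred_skills(demonstrated):
    """Close the set of demonstrated skills under the implication map."""
    known = set(demonstrated)
    frontier = list(demonstrated)
    while frontier:
        skill = frontier.pop()
        for implied in IMPLIES.get(skill, ()):
            if implied not in known:
                known.add(implied)
                frontier.append(implied)  # follow chains of implications
    return known

# Demonstrating one high-leverage skill covers its prerequisites,
# so there is no need to assess them separately:
print(sorted(inferred_skills({"calculus"})))
# ['calculus', 'fractions', 'long_division']
```

Assess only the skills at the top of such chains, and the 93 elements collapse into a much shorter list.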

I am not just grumbling. Poorly designed, bloated, and unenforceable standards cost taxpayers millions and millions of dollars in labor costs and lost opportunity. More significantly, they demoralize faculty, who must pretend to comply with the poorly designed requirements. I just met with two young faculty members to discuss their committee work. One of them had recently turned in a 317-page matrix document; the other was in a very small group that submitted a 546-page document. In addition, they were asked to submit things like lists of all students and their placements, lists of all faculty by status and by courses taught, adjunct and tenure-track faculty job announcements, student handbooks, hours in the field by type of activity, current lists of MOUs with partners, training materials for supervisors, clinical experience assessment instruments, descriptions of the process ensuring appropriate recommendation, candidate progress monitoring documents, etc., ad nauseam. I felt bad for them. Did they work so hard on their PhDs to do all this mindless work? Does anyone really think these torturous processes assure program quality?

All of this is because of one small error. Standard development cannot be a democratic, all-inclusive process. Or rather, the initial phase of it can be, but not the whole of it. We failed to build in a second phase, where someone with an OK knowledge of research and some common sense could just edit the document down by, let’s say, 90%.

Oct 14, 2019

Why some people never reply to your emails, and how to stay cool about it

On every campus I know, a few people can be counted on to never return an email – not soon, not within a week, never. I always wonder why, and how they get away with it. I am sure they reply to the President’s messages, but not to mine. It may seem irritating, but the world of human communication always contains more shades of meaning.

I think there is a difference in a fundamental assumption about email: some implicitly believe it is an optional, almost superfluous form of communication. If you JUST email, you must not be that serious about it. They believe that not replying is not rude; it is just one of several options. Not replying means, “If you really need to get hold of me, call me, or find me on campus.” It can also mean, “I am not really interested in answering your question, or engaging in a conversation with you at this time. Please remind me later.” Part of the problem is that there is no way to signal how important your e-mail is. Yes, I know about the High Importance button, but it is reserved for true emergencies. Some readers may perceive this as an overly generous interpretation, but I believe it. I use non-replying very rarely, and for me it means, “I do not wish to continue this conversation.”

In many cases, senders ignore the important difference between the “To” and “CC” fields. In theory, only people in “To” are expected to reply; the others are there for information only. In practice, it is all over the place. I do not reply when it is obvious that other people among the addressees are in a better position to answer. However, that assumption can be wrong, and then none of the addressees answers, because they all assume someone else is in a better position to do so.

Then there is the random error that eats up messages – from accidental deletion to various devices’ synchronization problems. It is a “sync or swim” world. Statistically, it is quite probable for an error to strike twice or even three times against my messages in your mailbox. However, the human mind does not tolerate low-probability coincidences, so after the second error I will think you are ignoring me. The solution is to try again: write a second message, or call and follow up. Some tolerance for human and technical error is essential to a healthy organizational culture.

Some people simply lack good skills for dealing with their email flow. There is a method here. For example, reading e-mail three times helps, counterintuitively. The first time is a quick scan, where you delete junk and answer messages that take no effort to answer quickly. Then you read the more substantial emails, but do not respond right away. Your brain will subconsciously work on the replies, even though it does not seem to. Then quickly scan again before actually replying. Sorting by sender, Outlook rules, and conversation view also help deal with the flow. Amazingly few people take advantage of the federally mandated “unsubscribe” link, so their inboxes are clogged with spam. If you never see the bottom of your inbox, you should probably learn a few things.

Finally, some people just receive too many emails. Faculty who teach large sections deserve the most sympathy here. Yes, there are many tricks to reduce the flow of student e-mails (most importantly, do not make your syllabus and assignments so confusing). However, student email inflow can be truly overwhelming at times. The rest of us, administrators, should not be getting more than 20-50 emails a day. Getting more is a reason to rethink how you organize your work: you are probably not delegating enough, not automating enough, and have become a human bottleneck. Being overwhelmed with email is nothing to be proud of; I would not recommend bragging about it. It is, rather, a worrying sign.

Even 30 emails take 2-3 hours to work through, which is a major portion of our workload. One has to recognize this and plan for it as a daily task. For example, a day of back-to-back meetings guarantees a second shift at night, reserved just for email. The shift is lonely and cranky. I have learned to never send any important emails at night – they always come out wrong: either too curt or too vague.

Oct 4, 2019

Join the Google Revolution. An open letter to CTC

In California, 45 main and 14 additional standard elements describe the requirements for elementary teacher preparation. Each of the main elements should be introduced, practiced, and assessed, which makes a minimum of 135 data points that must be linked to a specific place in one of the 15 course syllabi. Of course, many elements are actually taught several times and are mentioned in different parts of a syllabus. For example, element 1.3 (“Connect subject matter to real-life contexts and provide active learning experiences to engage student interest, support student motivation, and allow students to extend their learning.”) is linked to various places in syllabi 29 times. Element 3.1 is explained through 33 links, and so on. We have submitted 12 program reports, some of which have up to 88 standard elements (Mod/Severe SPED). That’s 12 matrices with hundreds of references to specific pages in multiple syllabi.

One can only imagine how many hours of tedious manual work went into constructing these matrices with thousands of links to syllabi. Because syllabi are dynamic documents, and they SHOULD change every semester, we have to use a special “official” syllabus that is not exactly the same as the document given to students. Moreover, most faculty use the learning management system (Canvas, in our case). Therefore, they have to construct an “anchor syllabus” mainly for compliance purposes.

Just wait, it gets worse. The reviewers do not find the matrices useful either. There is absolutely no way for a reviewer to click through hundreds of links, look at hundreds of pages of syllabi, and make a sound judgement on whether a program element is taught well. Therefore, they end up randomly clicking a few places and finding a few bugs. The reviewers actually get a really good sense of the program by talking to students, partners, and faculty. Professionals can always tell if things are going right or wrong. They report their overall conclusions based on those intangibles. However, they pretend to derive their conclusions from the massive accreditation reports.

I know the system well, at all levels. I know the people who developed those standards, those who designed the technical requirements for accreditation, and those who submit and review reports. These are all decent, smart, well-meaning people. None of them intended for the system to become so absurd. Good people sometimes build bad systems; this is the first law of organizational studies. What happened is that we managed to miss the Google revolution that profoundly changed information processing.

It is all about finding information. The first generation of data systems blindly followed the conventions of paper-based technologies: hierarchical directory structures. Some people still treat their personal files that way: they have directories, folders, subfolders, and sub-sub-folders, as well as file naming conventions. However, information is not hierarchical, and a given file can belong to two or three different folders. For example, a file on payments to faculty related to grants on graduation initiatives could belong to the Faculty folder, the Financials folder, the Grants subfolder, and the Graduation Initiative folder. Computer scientists came up with the clever trick of tags (or keywords): attach all four tags to the file, and retrieve it four different ways. In effect, the same file can sit in many different “folders” at the same time.
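The tagging trick can be sketched in a few lines. The file names and tags here are invented for the example; the point is only that one file becomes reachable from several “folders” at once.

```python
from collections import defaultdict

# Each file carries several tags (all names are made up).
tags_by_file = {
    "grant_payments_2019.xlsx": {"faculty", "financials", "grants",
                                 "graduation-initiative"},
    "fall_schedule.docx": {"faculty"},
}

# Build the reverse index: tag -> set of files carrying that tag.
files_by_tag = defaultdict(set)
for filename, tags in tags_by_file.items():
    for tag in tags:
        files_by_tag[tag].add(filename)

# The same file is now found under any of its four tags:
print(files_by_tag["grants"])      # {'grant_payments_2019.xlsx'}
print(files_by_tag["financials"])  # {'grant_payments_2019.xlsx'}
```

The file was stored once, but it effectively lives in four folders, which is exactly what a strict hierarchy cannot do.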

Then Google came along, and its founders had a breakthrough insight: every word in a document is already a tag, every word is a keyword and, in a weird way, a folder of its own. If you index the entire internet, you can find anything just by using the words or phrases in the document. Using natural-language phrases helps narrow down the search. The information you get from a Google search is not as neatly structured, but it is a lot cheaper and vastly more relevant than what we had before.
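The core of this insight is the inverted index: every word in every document becomes a key pointing back to the documents that contain it. Here is a minimal sketch, with made-up document names and text (real search engines add ranking, stemming, and much more on top):

```python
from collections import defaultdict

# Tiny "corpus" of made-up program documents.
documents = {
    "syllabus_reading.txt": "phonics instruction and formative assessment",
    "syllabus_sped.txt": "IFSP development and formative assessment",
}

# Inverted index: word -> set of documents containing it.
index = defaultdict(set)
for name, text in documents.items():
    for word in text.lower().split():
        index[word].add(name)

# Every word now acts as its own "folder":
print(sorted(index["formative"]))
# ['syllabus_reading.txt', 'syllabus_sped.txt']
print(sorted(index["phonics"]))
# ['syllabus_reading.txt']
```

Nobody tagged anything; the words themselves did the filing.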

It took a while for this thinking to find its way onto people’s personal computers. Like many other people, I do not have any folders on my drive – I just search through my documents the same way I would search the internet. It is the same with e-mail: there is no point in storing it in folders; just search for what you remember was in the message – names, words, numbers. With large text data, searching is really the only game in town. There is no other economical way of organizing and retrieving these data. Accreditation bodies everywhere have missed the revolution completely, and design accountability practices assuming the data are small. However, the data sets are much larger than they assume, and the work of marking them up (tagging, linking) is out of hand.

Here comes my pitch to the CTC (the California Commission on Teacher Credentialing) and to all accrediting bodies in the world:
  • If you want to see the real dynamic picture, not a set of documents constructed just for you;
  • If you want faculty and staff to work on program improvement, not on mindless compliance;
  • If you want to save millions of mostly public dollars;
All you have to do is this: ask the programs to put all their current, real syllabi, Canvas shells, handbooks, and program websites into one searchable directory. Ask your reviewers to google whatever they are curious about in each of the programs under review. They will see a search box tuned to look only through the documents specific to the one program in question. For example, to see whether the program actually teaches about individualized family service plans, google “IFSP,” and see how and in which contexts it is introduced and assessed. Google “phonics” if you think we do not teach it enough. Google anything else related to any of the standards, as you do in your everyday life when you want to learn something about anything. Program review is just that, learning about a program, right?

(Now, the standards also need to be trimmed; 60 elements is simply ridiculous. Engage in Deborah Ball-like thinking: there are essential, priority skills that you need to work on and assess. The time of checklists is over. That is an occasion for another revolution; here I will just suggest that the standards could be a list of key concepts rather than the vague pseudo-scientific statements they are today.)

Catching up with the Google revolution would liberate us from a whole lot of useless work and allow us to do more for program quality while doing less for the sake of simple compliance. Compliance takes away all our resources, all our time, and all our energy, so that very little is left for actual improvement.