
Jun 10, 2019

The secret lives of proxies

We need them, we get used to them, and they take on lives of their own. This is a pitch to always remember that a proxy measure is just a proxy, not the real thing.

A proxy is an indirect measure of something. Higher education, where direct measures are few, relies on proxies especially heavily. For example, universities use acceptance rate as a proxy for the quality of their students: the more “selective” the university, the higher its presumed quality. Of course, a less selective university may rank higher by the value it adds to students’ education, and in that sense be more productive. But who needs thinking when there is a simple index, right?

It is the same with the selectivity of each program. How many applied? How many were accepted? However, if your program has a reputation for being very rigorous, it will discourage many people from applying in the first place, so the proxy stops working right there. Moreover, if you are very clear and careful about your program’s expectations and admissions criteria, fewer people will waste their time applying. You have a better program, but it has a low rejection rate. This creates an incentive to over-promise to applicants, which makes the whole system less efficient and more deceitful.

Universities often proudly present their student-to-faculty ratios: 15:1 is supposedly better than 20:1. Why? Because a lower ratio translates into smaller classes, more personal attention from faculty, and, in theory, better educational quality. But remember, it is only a proxy. We have no idea what is going on in those smaller or larger classes, or whether the quality of teaching and learning is in any way different. In fact, there is little evidence that class size has a direct effect on learning outcomes. A low ratio can also be the result of poor management, when some classes are huge and many are small for no good reason.
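To see how that works, here is a minimal sketch with made-up enrollment numbers (everything in it is hypothetical): the per-section average class size matches the flattering ratio, while the class size the typical student actually sits in is several times larger.

```python
# Toy illustration with hypothetical numbers: the same campus can report a
# "good" student-to-faculty ratio while most students sit in very large classes.

# 100 instructors, one section each: 6 big lectures and 94 small seminars.
class_sizes = [200] * 6 + [8] * 94

students = sum(class_sizes)            # 1952 enrollments
faculty = len(class_sizes)             # 100 sections / instructors
ratio = students / faculty             # ~19.5 : 1, looks respectable

# Average class size as the institution reports it (per section):
avg_per_section = students / faculty   # ~19.5

# Average class size as a student experiences it
# (weight each class by how many students sit in it):
avg_per_student = sum(s * s for s in class_sizes) / students   # ~126

print(f"ratio {ratio:.1f}:1, section average {avg_per_section:.1f}, "
      f"student-experienced average {avg_per_student:.1f}")
```

In this made-up case the reported ratio is under 20:1, yet more than half of all enrollments are in 200-person lectures, which is exactly the gap the proxy hides.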

Take the College Scorecard, something the Obama administration put together. It has three measures: the average annual cost, the six-year graduation rate, and the median salary 10 years after graduation. Sac State measures at $9.7K, 48%, and $48.9K, respectively. UC Davis, our neighbor, measures $17.7K, 85%, and $58.2K. Each of the three numbers is supposed to tell something about the quality of the institution. However, UC Davis has a medical school, and we have a much larger group of teachers and social workers; these two things skew the medians. Depending on your career choices, an investment in Sac State may or may not be better than one in UC Davis. The proxies tell very little about the quality of instruction or return on investment. All three indicators are useful, unless someone attempts to read too much into them. To the government’s credit, the College Scorecard is never described as a measure of quality. In fact, it gives very little interpretation, just “here is some data.”

Here is another interesting proxy: people who work long hours are assumed to be more productive. We all understand that it is not so, that productivity is as much a function of organization, smart planning, and tech skills as it is of hours worked. And yet somehow this particular proxy insinuates itself into most people’s thinking. One person can spend days fishing hundreds of email addresses out of a long document. Another person will google for a macro that does the same thing in five seconds, and then take a long walk to think of the next creative solution. Which of them is more productive?
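For what it is worth, the “macro” in that example could be nothing more than a few lines of a script. Here is a minimal sketch in Python, assuming the document is plain text; the file name and the deliberately simple address pattern are illustrative only.

```python
import re

# Pull every email address out of a long plain-text document in one pass,
# instead of copying them out by hand.
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

with open("long_document.txt", encoding="utf-8") as f:
    emails = sorted(set(EMAIL_PATTERN.findall(f.read())))

print(f"Found {len(emails)} unique addresses")
for address in emails:
    print(address)
```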

Is longevity a good proxy for a population’s health? Yes, up to a certain degree. Yet if a society figures out how to keep very sick people alive longer, the proxy stops reflecting what you think it is supposed to reflect. You just have many more sick people hanging on, not an improvement in health and well-being.

We are better off with proxies than without any measures and indicators at all. However, one has to be aware of the danger of proxy fetishism, when a proxy measure takes on a life of its own and becomes important in itself. Every proxy rests on the assumption that whatever you are measuring is closely correlated with the thing you actually want to measure but cannot. Very rarely do these correlations stay constant, especially at the extremes.

Jun 3, 2019

Relation-Centered Education: A Call to Action

Critiques of test-based accountability abound; alternatives to it are few and far between. The demand for accountability is not an aberration, not a mental affliction of “neoliberalism.” It exists because education has become huge and very expensive: the US spends about 8% of GDP on education, and most OECD countries are not far behind. The quality of education varies greatly across the world and within each country. The public has the right to know how the huge sums of money are spent and whether there is any improvement. Many educators secretly hope that the taxpayers will fund them and not ask any questions. Well, this is not how it works. There is no public support without accountability.

For all the obvious shortcomings of standardized testing, it is cheap, reliable, and fairly objective. It measures only one dimension of education, and too narrowly at that. If we want to change the game, we need to come up with another instrument that measures a different dimension of education equally reliably and cheaply. What would that other dimension be?

Some 15 years ago, a group of us wrote a book called No Education without Relation, which opened with the “Manifesto of Relational Pedagogy.” We argued that relations are not just an important educational means but also an educational end. Frank Margonis coined the term “Pedagogy of Relation,” which has an intuitive appeal to many educators. We drew on several philosophical traditions (Buber, Bakhtin, Noddings, and others). The book was well received, but it suffered from the usual limitation of theoretical work: it stayed largely within the theoretical discourse.

Over the years, there was some response from empirical researchers and practitioners, much of it outside the US. There is a fairly long tradition of classroom and school climate research; the problem is that it relies entirely on self-report, which makes the data unreliable and expensive to collect. There is a whole group of people trying to measure 21st-century skills, but they are struggling, because there is no good theory behind that movement. Focusing on skills simply broadens the current test-based accountability; it does not offer a whole new dimension of education to consider. However, a quick look across educational scholarship reveals that many people from different disciplines recognize the centrality of relations in the educational enterprise. They may call it something else and draw on vastly different traditions, but they still demonstrate the “family resemblance.”

Some of the original authors are now trying to form an interdisciplinary network that would include empirical researchers, practitioners (both teachers and teacher educators), psychologists, psychometricians, and policy scholars. The network would start building a knowledge base around these goals, from both existing and new scholarship. It may have very practical objectives:
  1. Understand how educators create positive educational relations with and among students
  2. Learn how to help teachers develop their relational skills
  3. Develop good instruments to measure the quality of relations in educational settings
  4. Offer a policy framework for relational accountability, to augment or replace the cognitive, test-based accountability approaches
What is the next step? We need to meet to have an initial conversation, an organizing meeting. It would be easier to hold it here in Sacramento, but first I need to gauge interest. Here is the link to put your name down if you are interested.