In California, 45 main and 14 additional standard elements describe the requirements for elementary teaching preparation. Each of the main elements must be introduced, practiced, and assessed, which makes 135 minimal data points, each of which must be linked to a specific place in one of the 15 course syllabi. Of course, many elements are actually taught several times and are mentioned in different parts of the syllabi. For example, element 1.3 ("Connect subject matter to real-life contexts and provide active learning experiences to engage student interest, support student motivation, and allow students to extend their learning.") is linked to various places in syllabi 29 times. Element 3.1 is explained through 33 links, and so on. We have submitted 12 program reports, some of which have up to 88 standard elements (Mod/Severe SPED). That's 12 matrices with hundreds of references to specific pages in multiple syllabi.
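To give a sense of the bookkeeping involved, here is a toy sketch of what one such matrix amounts to as data; the course names and page numbers are made up purely for illustration.

```python
# Toy sketch of one accreditation matrix: every (element, stage) pair must point
# to specific pages in specific syllabi. Course names and page numbers below are
# hypothetical, used only to show the shape of the bookkeeping.
STAGES = ["introduced", "practiced", "assessed"]
MAIN_ELEMENTS = 45  # plus 14 additional elements, not counted here

# 45 main elements x 3 stages = 135 minimal data points per report
print(MAIN_ELEMENTS * len(STAGES))  # 135

# In practice each element is linked many times; element 1.3 alone has 29 links.
matrix = {
    ("1.3", "introduced"): [("EDUC 510 syllabus", 4), ("EDUC 512 syllabus", 7)],
    ("1.3", "practiced"):  [("EDUC 544 syllabus", 11)],
    ("1.3", "assessed"):   [("EDUC 560 syllabus", 9)],
    # ... and so on for every element, across 15 syllabi and 12 program reports
}
```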
One can only imagine how many hours of tedious manual work went into the construction of a matrix with thousands of links to syllabi. Because syllabi are dynamic documents, and they SHOULD change every semester, we have to use a special "official" syllabus that is not exactly the same as the document given to students. Moreover, most faculty run their courses through the learning management system (Canvas in our case), so they have to construct an "anchor syllabus" mainly for compliance purposes.
Just wait, it gets worse. The reviewers do not find the matrices useful either. There is absolutely no way for a reviewer to click through hundreds of links, look at hundreds of pages in the syllabi, and make a sound judgment on whether a program element is taught well. So they end up clicking a few places at random and finding a few bugs. The reviewers will get a really good sense of the program by talking to students, partners, and faculty. Professionals can always tell if things are going right or wrong. They will report their overall conclusions based on those intangibles. However, they will pretend to derive their conclusions from the massive accreditation reports.
I know the system well, at all levels. I know the people who developed those standards, those who designed the technical requirements for accreditation, and those who submit and review reports. These are all decent, smart, well-meaning people. None of them intended for the system to become so absurd. In general, good people sometimes build bad systems; that is the first law of organizational studies. What happened is that we managed to miss the Google revolution that profoundly changed information processing.
It is all about finding information. The first generation of data systems blindly followed the conventions of paper-based technologies: hierarchical directory structures. Some people still treat their personal files that way: they have directories, folders, subfolders, and sub-sub-folders, as well as file naming conventions. However, information is not hierarchical, and a given file can belong to two or three different folders. For example, a file on payments to faculty related to grants on graduation initiatives could belong in the Faculty folder, the Financials folder, the Grants subfolder, and the Graduation Initiative folder. Computer scientists came up with the clever trick of tags (or keywords): you could attach all four tags to this file and retrieve it four different ways. In effect, the same file could sit in many different "folders" at the same time.
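A minimal sketch of the tag idea, using the hypothetical grant-payment file from the paragraph above:

```python
# Minimal sketch of tag-based filing: one record can sit under many "folders" at once.
from collections import defaultdict

tag_index = defaultdict(set)  # tag -> set of file names

def add_file(name, tags):
    """Register a file under every tag attached to it."""
    for tag in tags:
        tag_index[tag].add(name)

# The grant-payment file from the example carries all four tags.
add_file("faculty_grant_payments.xlsx",
         ["Faculty", "Financials", "Grants", "Graduation Initiative"])

# The same file can now be retrieved four different ways.
print(tag_index["Financials"])             # {'faculty_grant_payments.xlsx'}
print(tag_index["Graduation Initiative"])  # {'faculty_grant_payments.xlsx'}
```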
Then along came Google, whose founders had a breakthrough insight: every word in a document is already a tag, every word is a keyword and, in a weird way, a folder of its own. If you index the entire internet, you can find anything just by using the words or phrases in the document. Using natural-language syntax helps narrow down the search. The information you get from a Google search is not as neatly structured, but it is a lot cheaper and vastly more relevant than what we had before.
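Pushed to its limit, that insight is just an inverted index. Here is a toy sketch, with two made-up syllabus snippets standing in for the web:

```python
# Toy inverted index: every word in a document acts as a tag for that document.
# The two "documents" are hypothetical syllabus snippets, used only as an illustration.
import re
from collections import defaultdict

docs = {
    "EDUC_510_syllabus": "Week 3: connect subject matter to real-life contexts ...",
    "EDUC_544_syllabus": "Candidates design active learning experiences to engage student interest ...",
}

index = defaultdict(set)  # word -> set of document ids
for doc_id, text in docs.items():
    for word in re.findall(r"[a-z\-]+", text.lower()):
        index[word].add(doc_id)

def search(phrase):
    """Return documents containing every word of the phrase (a simple AND query)."""
    words = re.findall(r"[a-z\-]+", phrase.lower())
    results = [index[w] for w in words]
    return set.intersection(*results) if results else set()

print(search("active learning"))     # {'EDUC_544_syllabus'}
print(search("real-life contexts"))  # {'EDUC_510_syllabus'}
```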
It took a while for this thinking to find its way into people's personal computers. Like many other people, I do not have any folders on my drive; I just search through my documents the same way I would search the internet. It is the same with e-mail: there is no point in storing it in folders, just search for what you remember was in the message: names, words, numbers. With large text data, searching is really the only game in town. There is no other economical way of organizing and retrieving these data. Accreditation bodies everywhere have missed the revolution completely and design accountability practices assuming the data are small. However, the data sets are much larger than they assume, and the work of marking them up (tagging, linking) has gotten out of hand.
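For accreditation, the same approach could replace the matrix altogether. A rough, hypothetical sketch (the directory, file format, and key phrases are my own placeholders, not anything the CTC prescribes): instead of hand-linking element 1.3 to 29 places, search the live syllabi for its language.

```python
# Hypothetical sketch: find evidence for a standard element by searching the
# actual syllabus files, instead of maintaining a hand-built matrix of page links.
# The directory name, file format, and key phrases are placeholders.
from pathlib import Path

ELEMENT_1_3_PHRASES = [
    "real-life contexts",
    "active learning experiences",
    "student motivation",
]

def find_evidence(syllabus_dir, phrases):
    """Report every syllabus line that mentions any of the element's key phrases."""
    hits = []
    for path in sorted(Path(syllabus_dir).glob("*.txt")):
        for line_no, line in enumerate(path.read_text().splitlines(), start=1):
            for phrase in phrases:
                if phrase in line.lower():
                    hits.append((path.name, line_no, phrase))
    return hits

# A program (or a reviewer) could run this against the current, living syllabi:
# for name, line_no, phrase in find_evidence("syllabi/", ELEMENT_1_3_PHRASES):
#     print(f"{name}:{line_no} mentions '{phrase}'")
```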
Here comes my pitch to the CTC (the California Commission on Teacher Credentialing) and to all accrediting bodies in the world:
- If you want to see the real dynamic picture, not a set of documents constructed just for you;
- If you want faculty and staff to work on program improvement, not on mindless compliance;
- If you want to save millions of mostly public dollars;

then stop demanding hand-built matrices. Give reviewers the actual, living documents and let them search for the evidence, the way we now search everything else.
(Now, the standards also need to be trimmed; 60 elements is simply ridiculous. Engage in Deborah Ball-like thinking: there are essential, priority skills that you need to work on and assess. The time of checklists is over. That is an occasion for another revolution; for now I will just suggest that the standards themselves could be a list of key concepts rather than the vague pseudo-scientific statements they are today.)
Catching up with the Google revolution would liberate us from a whole lot of useless work and allow us to do more for program quality while doing less for the sake of simple compliance. Compliance takes away all our resources, all our time, all our energy, so that very little is left for actual improvement.