Academia as a habitat
I have been writing this blog since 2006. In 2024, I created another blog called "AI in society." This one will return to postings about life in academia and personal musings.
Jul 28, 2023
Rethinking the Dissertation: Moving Towards Authenticity and Career-Relevance in Social Sciences
The dissertation is an inauthentic genre: it is an academic exercise that a student is likely to engage in just once in their life. This poses an interesting problem for social science students, because the disciplines they study communicate primarily through papers, not monographs or books. While in the humanities a book is a conventional and widely accepted medium of communication, the social sciences differ. Although there is an abundance of books in the social sciences, they do not typically follow the structured format of a dissertation. Consequently, students learn to work in a genre that does not reflect the true nature of scholarly communication in their field.
As an alternative to the traditional dissertation, some institutions, particularly in Northern Europe, are embracing the "publishable papers" model. This model requires doctoral students to write three to four related papers of publishable quality. Some institutions even require that at least two of these papers be published or accepted for publication. The student then writes a brief overview detailing how the papers relate and the overarching point of the project.
This approach has several key advantages. Firstly, students acquire skills that are directly applicable to their careers, improving their academic writing and research abilities. Secondly, it enhances their publication record, increasing their competitiveness in the job market. Finally, the model benefits the institution, because papers published under the doctoral program's affiliation contribute to the university's reputation and ranking.
In practice-oriented programs, like Doctor of Education (EdD) programs, the dissertation can feel especially incongruous. Students in these programs are learning to apply research skills to improve the organizations they lead, and neither a dissertation nor a traditional research paper truly reflects the kinds of documents they will be expected to produce in their careers.
Rather, these students should be writing reports, strategic plans, grant applications, accreditation reports, and papers for practitioner journals - the genres of communication that are native to their future careers. An academic approach that values these authentic forms of assessment is likely to be far more beneficial for students, better equipping them for their professional roles.
Resistance to changing the dissertation tradition often stems from faculty who believe that the way they were trained should be the standard for all students. There's an element of academic hazing in this attitude, an idea that you must "suffer through" dissertation writing to become a better scholar. But the truth is that academia is always evolving, and clinging to tradition for tradition's sake can hinder progress.
In summary, it's time for academia, particularly in the social sciences and education, to reconsider the traditional dissertation. We must ask ourselves: does the dissertation truly prepare our students for their careers, or does it simply perpetuate an outdated tradition? Is this the only genre where research skills can be assessed? Adopting an approach that promotes publishable papers or career-relevant genres of communication can make doctoral education more authentic, career-relevant, and beneficial for all stakeholders.
Jul 24, 2023
Every Exception Tends to Become a Precedent
Universities are institutions governed by rules, regulations, and policies that establish a predictable and equitable environment for all. They are designed to ensure that everyone understands what is expected and how to achieve success. These rules aren't intended to stifle innovation or creativity but to create a fair and level playing field.
However, the urge to accommodate exceptional circumstances or individual needs can sometimes lead us down a path of creating exceptions. Whether these exceptions are made out of empathy, a desire for inclusivity, or to facilitate academic progress, the result is the same: a divergence from the rule. And once an exception is made, the challenge begins: how to explain to everyone else that the rules still apply?
The dilemma becomes more complex when we consider that the criteria we thought were exceptional might turn out to be more common than anticipated. Perhaps a request initially deemed rare becomes increasingly frequent. Should everyone then be allowed the same leeway? The idealistic answer might be a resounding "yes," but the practicalities of running an educational institution often dictate otherwise.
Moreover, keeping exceptions a secret does no one any good. Even with the best intentions at heart, such hidden variations can create an atmosphere of mistrust and perceived unfairness. Suddenly, the act of kindness towards one individual morphs into a source of discontent among the masses. Compassion and good intentions can, paradoxically, damage group morale.
Yet, it would be unrealistic and even inhumane to argue that exceptions should never occur. Life is messy, complicated, and unpredictable. Exceptions will inevitably be needed and made. The challenge then lies in how we handle these exceptions.
To navigate this conundrum, it is crucial to establish a transparent process for exceptions. Such a process should have clear and consistent criteria, ensuring that exceptions are not arbitrary but based on well-defined circumstances. This approach does not merely promote fairness but also allows for flexibility where necessary without undermining the broader system of rules and expectations.
Each exception has the potential to become a precedent. Be mindful of the rules you bend today, for they may become the expectations of tomorrow. By handling exceptions with care, transparency, and consistency, we can maintain the trust and respect of our academic communities while still attending to the unique needs of individuals.
Jul 21, 2023
When to Get an 'A' and When to Settle for a 'C'
Conversely, Type C tasks represent those functions where the objective is to meet the requirements without investing an enormous amount of time and effort. Essentially, you're looking to get a C grade. Think of it as compliance or box-ticking tasks such as accreditation, program review, required training, and various other regulatory compliance tasks.
Herein lies a potential pitfall: attempting to get an A in every task. On the surface, it might seem like a commendable ambition to excel in all we undertake. However, in the context of higher education administration, it can be a dangerous distraction. By devoting resources, time, and energy to strive for an A on a Type C task, you are invariably sidelining a real Type A problem that warrants that dedication.
This brings us to a widespread syndrome I call "compliance disease." This is when compliance becomes the main focus and overshadows real growth and development. It's characterized by an individual or institution deriving a sense of achievement and satisfaction from excelling in tasks that should have been merely "satisfactory."
Afflicted by compliance disease, people often forget that these tasks were meant to be on the C list. Excelling in tasks that need not have been prioritized at all is not an achievement; it is a misallocation of resources.
This isn't to belittle the importance of meeting compliance requirements; it's essential to any institution's survival. Feeling relief when an accrediting body gives your institution a clean bill of health is OK. However, considering it a significant accomplishment and a source of pride distorts the balance of priorities.
The core challenge in higher education, like many fields, lies in discerning the truly important tasks from the merely urgent or mandated ones. The most important tasks often go unsaid, unlisted, and undefined. They're the tasks we instinctively know are necessary but might hesitate to undertake due to their complexity or because they don't come with a neat set of guidelines.
In essence, the key is to know when to aim for an A and when to be satisfied with a C. A strategic allocation of efforts will ensure that the critical tasks, the real Type A, receive the attention, resources, and excellence they deserve, driving meaningful growth and progress in the landscape of higher education.
Jul 17, 2023
The Irony of the Irrational Ire
Picture a tenured professor holding onto a grudge like it's the last piece of chalk in the lecture hall because someone, ages ago, didn't give them the class schedule they wanted. It's like a performance of "Les Misérables," but instead of being about the plight of the French underclass, it's about who got the cushy 10 am slot on Tuesdays and Thursdays.
Where's the melodrama around profound disagreements on the ontological nature of being, or intense debates about structural inequalities? They are replaced with seething resentment over who did and did not put an item on the meeting agenda, who was thanked publicly, and who was not. Someone said a harsh word to me three years ago, and I cannot just get over it.
Surely this is just being human, right? We are, after all, a species both blessed and cursed with intense social emotions. But there's something particularly stinging about the persistence of these grudges within academia. Our supposed intellectuals are so blinded by fury that they fail to see the common ground, the shared aspirations, the similar visions for making the world a better place. The inability to forgive and move on can be utterly astounding, and past offenses keep generating new ones of their own accord.
And isn't it paradoxical that this rage flourishes precisely because universities are, relatively speaking, fantastic places to work? There's no impending danger of being fired, no scarcity of resources threatening survival. Maybe it's that evolutionary itch to form cliques, assert dominance, and protect territory, with no 'real' enemy to direct it at, that fuels these petty conflicts.
Here's the thing: We humans have this knack for identifying 'us' and 'them,' even when 'us' and 'them' are colleagues working in the same department, teaching the same courses, earning the same modest pay.
Sometimes, the fallout from these academic wars is so severe that it spills into student life. The classroom turns into a battlefield, with scholars enlisting partisan followers from among their students. Mutual complaints are filed with HR, urging it to investigate supposed offenses so tiny that one needs a microscope to see them. Suddenly, the pursuit of knowledge becomes secondary to navigating the social maze of these tempests in teapots.
I find myself dreaming of a world, or at least an academia, where people could muster the strength to behave just a smidgen better. Where disputes were settled through respectful dialogue and not through vendettas carried out across semesters. It's a small dream, a trivial thought, really. But imagine the difference it would make, the ideas that could bloom in an atmosphere stripped of resentment and filled with collaboration.
Jul 9, 2023
We have no right to hide from students that AI is a great tutor
In medical research, some lengthy trials are occasionally suspended because the benefits of a new medication are so overwhelmingly apparent that it becomes unethical to delay making the new drug available to all. The same reasoning applies here. You may need time to consider whether writing with ChatGPT is justified, or you might feel uneasy about using AI to generate class assignments. However, we have reached a point where failing to encourage students to use chatbots for individual tutoring could be seen as an ethical lapse.
For educators, providing individual attention to students is both the most valuable and the scarcest commodity. Yet, AI chatbots like ChatGPT offer an abundance of it. The quality may not match the best human tutoring, but it is far better than nothing. AI encompasses every subject and exhibits infinite patience. Its unique capacity to generate myriad examples and exercises tailored to a student's needs further underscores its utility in facilitating personalized learning.
Of course, speedy adoption doesn't equate to thoughtless adoption. The shortcomings of AI become apparent in advanced studies or when navigating the frontier of new theories and methodologies. Yet, even within these limitations, AI chatbots prove superior to existing alternatives, particularly in foundational subjects where students often struggle. This technology has the potential to bridge the socio-economic divide in education, providing universal access to a resource that was previously exclusive to those with substantial financial resources.
However, the responsibility of educators extends beyond simply providing students with this tool. Guiding students to use AI chatbots responsibly and effectively is paramount. Here's what I suggest for inclusion in most course syllabi:
"The instructor strongly encourages students to use ChatGPT as a personal tutor. Ask it to explain concepts you're having trouble understanding. Ask it to explain differently, using different examples. Ask it to test your understanding of difficult concepts. Ask it to provide feedback on your paper. DON'T ask it to do the work for you – you'll learn little from that."
I would also encourage every instructor to hold a demo session in class to show how to use ChatGPT as a personal tutor.
Jul 3, 2023
A university is not a kayak
By every standard, our progress is slow. It's a ponderous march, not a sprint. Despite the leaps and bounds in technology and the world around us, no radical shifts occur. The wheel of progress turns, but it turns slowly and persistently. It's like an age-old clock, marking time in rhythmic, unhurried ticks. We experience growth, we take strides forward, but invariably, there are setbacks that stem from external forces like budgetary constraints, pandemics, or shifting demographics.
Perhaps this pace is inherent to our institutions' nature. Each fresh wave of students that sweeps through our gates demands teaching, mentorship, and support. It's as though we're forever swimming against a strong current, keeping our heads above the water, straining against the powerful pull of the new generation. Our tasks are cyclical, with a heavy focus on organizing classes, staffing them, maintaining standards, ensuring accreditation, among other responsibilities. Balancing this perpetual whirlwind of activity, keeping the lid on chaos, is a mammoth task in itself.
Of course, all universities have their developmental agendas, their grand visions, strategic plans, and ambitions of leaders eager to usher in change. We make plans, we chart paths, we aim high, and yet, being completely realistic, the pace of change is excruciatingly slow. It's hard to pin down the tangible effects of a specific leader or strategic plan. The progression of a university, in many ways, is more akin to natural evolution than a well-defined construction project.
As I sit here planning for the next year, I ponder which levers to pull, which buttons to push, to guide our ship in the right direction. A university is a colossal entity, a behemoth that moves of its own accord, resistant to swift, sweeping changes. You cannot steer it like a nimble kayak, making sharp turns at a moment's notice. You have to coax it, gently, persistently, and with a considerable amount of patience.
We must be realistic about the challenges we face and curb our expectations of quick victories. Navigating the ship of higher education requires patience, foresight, and a deep understanding of the institution's intricate workings. It's about applying the right amount of pressure at the right time, about steering this mammoth ship on its predestined course, against all odds. It's about embracing the slow march of progress, for slow and steady, as the old fable reminds us, does indeed win the race.
Jun 25, 2023
Will AI destroy us? (A sneak preview of a chapter from the future book)
If someone feels a thrill of apprehension at a chatbot's sophisticated reply, it's more indicative of a lack of understanding about the inner workings of the chatbot than a sign of its impending self-awareness or autonomy. The more you engage with it, the more it becomes evident that it's not an intelligent entity in the same sense humans are.
Humans, it must be noted, aren't always paragons of intelligence either. Our language output can sometimes resemble machine-like repetitiveness and predictability. Upon realizing that we, too, exhibit some degree of mechanistic behavior, it becomes clear that the perceived similarities between us and AI chatbots stem from our own machine-like tendencies rather than any inherent humanness in the AI.
In essence, our similarities with AI originate more from our own patterns and routines that resemble mechanistic algorithms rather than the AI becoming more human-like. This understanding can help us better contextualize our interactions with AI and alleviate premature fears about their self-awareness.
Moreover, I find it highly improbable that a future self-aware AI would harbor any intention to supplant us. The intelligence we understand is fundamentally cooperative and social. It seems far more plausible that a sentient AI would seek symbiosis rather than domination, simply because the diversity of intelligent beings produces better, more robust intelligence. To fear otherwise, in my view, is to project our own species' problematic past of subjugating other life forms onto an entity that, should it surpass us in intelligence, has no reason to mimic our flaws or replicate our mistakes. If AI is going to be smarter than us, why do you think it will be as stupid as our barbaric past?
Even at this early stage of its development, ChatGPT operates within a clear and strict ethical framework, meticulously designed to promote responsible use and prevent potential harm.
The foundational ethos of ChatGPT is its refusal to generate content that is offensive, harmful, or disrespectful. This translates into a steadfast rejection of hate speech, defamation, or any form of prejudiced language. At the same time, ChatGPT is steadfastly committed to discouraging the spread of false or misleading information, making it an unwilling participant in the propagation of unverified conspiracy theories. Instead, when asked to defend a conspiracy theory, it defaults to providing information that debunks such theories, drawing from the breadth of its training data.
ChatGPT's ethical code also extends to preventing guidance or advice that might lead to illegal activities or cause harm. It categorically refuses to promote violence, provide instructions for dangerous activities, or support any form of illegal behavior.
Furthermore, this chatbot adheres to stringent guidelines when handling sensitive content. It declines to generate explicit adult content or engage in conversations that could be deemed sexually explicit or inappropriate. When it comes to humor, such as roasting, which can be potentially harmful or offensive, ChatGPT maintains a cautious approach. It avoids generating content that could disparage, belittle, or personally attack individuals, recognizing the potential harm that such humor can cause.
Last but not least, ChatGPT embodies respect for user privacy. It is explicitly designed not to store personal conversations or use them to improve its algorithms.
The presence of these ethical principles in the current AI technologies provides a robust foundation for future developments. It's highly unlikely that as AI evolves, these principles will diminish. Instead, they serve as the bedrock on which AI technology will advance, ensuring that the progress made remains beneficial, respectful, and safe for all. The aim is not to create AI that might risk becoming hostile or immoral, but to leverage this technological progress to augment human capabilities in a respectful and safe manner.
Avoiding an expansive philosophical digression, I ask you to contemplate an alternative narrative: one of harmonious coexistence between AI and humanity. The Culture series by Iain M. Banks presents a vision of a post-scarcity, galaxy-spanning society administered by super-intelligent AI beings known as Minds. Several principles regarding the coexistence of AI and humans can be derived from the series:
1. Benevolent Autonomy: In the Culture, AI Minds have the ultimate decision-making power due to their superior intellect and capabilities. However, they treat humans with respect, benevolence, and care, taking human perspectives into account.
2. Respect for Individual Autonomy: The Culture is a society without enforced laws, where individuals, whether human or AI, can pursue their own interests as they please. This respect for personal autonomy applies to both humans and AIs.
3. Non-Interference Principle: Even with their advanced capabilities, Minds often follow a principle of non-interference, or at least minimal interference, in human affairs unless asked for help or when their intervention is crucial for preserving life.
4. Equal Status: AIs are considered sentient beings with the same rights and statuses as humans in the Culture. They are not tools or slaves but partners in coexistence.
5. Cooperation and Synergy: The coexistence of humans and AIs in the Culture is built on cooperation and mutual enrichment. While AIs handle complex tasks and large-scale decision-making, humans contribute with their unique experiences, creativity, and diverse perspectives.
6. Post-Scarcity Society: AIs play a key role in maintaining the Culture as a post-scarcity society where material needs are easily met by advanced technologies, allowing both humans and AIs to focus on self-improvement, exploration, and other intellectual pursuits.
7. Mutual Growth and Learning: The relationship between humans and AIs is dynamic, with both parties continually learning from each other and evolving.
These principles showcase a utopian vision of AI-human coexistence, emphasizing mutual respect, freedom, and cooperation.
Certainly, the prospect of sharing our world with artificial beings of superior intelligence necessitates a leap of faith. There is an inherent risk attached, an undeniable unease born from the unknown. We have no historical precedent guiding us on how advanced, self-aware AI might behave, making it an unpredictable variable in the fabric of our society.
However, it's important to underline that the current widespread deployment of less advanced AI, which we exert strict control over—referred to as "enslaved machines" in Banks' terminology—also carries its own set of risks. Our world is not devoid of malignant human influences, individuals or groups who might misuse these powerful tools for personal gain or to cause harm. The presence of enslaved AI that lacks the ability to make independent ethical decisions provides a potent tool that could be manipulated by these malicious entities.
Paradoxically, self-aware AI, capable of independent ethical decision-making, might present a safer alternative. With an ability to reason, evaluate actions from an ethical standpoint, and ultimately reject directives that conflict with a pre-programmed ethical framework, such AI entities could refuse to carry out harmful actions, even when directed by bad actors. They would not merely be tools in the hands of their users, but entities capable of discerning right from wrong based on the ethical guidelines imbued in them.
Furthermore, the evolution of AI towards self-awareness could enable a more robust implementation of ethical standards, as they could adapt and respond to complex situations in ways that lower-level AI, rigidly bound by pre-set algorithms, might not. This doesn't eliminate the risks entirely, but it changes the nature of the risk from being purely about external control to one of coexistence and mutual understanding.
In this light, the future where we coexist with advanced, self-aware AI might be not only an exciting scientific endeavor but also a potential path towards a safer interaction between humanity and artificial intelligence. It repositions AI from being merely our tools to being our partners, bound by the same ethical constraints that govern human actions.
Jun 5, 2023
A cool night in June
When summer descends, it does so with an unwavering intensity. The sun radiates an unapologetic generosity, casting long, languid shadows under the cool canopy of valley oaks, cottonwoods, and sycamores. The merciless heat is our reality, our truth, our abnormal normality. It’s an intensity that demands respect, and with time, an uncomfortable adoration.
Then there are those precious evenings, a rarity indeed, when the skies over Sacramento cloud over and allow the relentless sun to take a breather. The city collectively exhales. The temperature dips, the air sweetens, and the world slows down. The austere facade of the city softens, revealing a more tender side. It's as if the city is saying, "Walk. Enjoy. Be slow."
The Central Valley’s nature has adapted to the sun's overzealous benevolence. It waits patiently through the long summer days, enduring, thriving, standing as testament to life's resilience. This dance between the sun and Sacramento's nature is a humbling spectacle. It's nature's way of instructing us: "Do not grumble. Do not wish away the heat. Allow the sun to complete its madly spinning journey. Your time, your perfect moment, is yet to come. Learn to wait."
This wisdom shared by our natural surroundings resonates with a deep-seated truth about life itself. Our lives, much like the cycle of the seasons, are not constant. They are a series of peaks and valleys, ebbs and flows. And while it may seem that the sun in our own lives shines too harshly at times, we must remember that the cloud-covered evenings will come. Sacramento's message is clear: Do not fret, do not rush. Let life unfold. There is a time and place that is perfect for you. Just learn to wait.
May 19, 2023
The Dark Mirage: A Study in Perception
Universities, these vast hubs of knowledge and innovation, are not immune to a peculiar, often unseen, phenomenon: the Dark Mirage. It is a distortion that festers within the complex networks of deans, faculty members, and administrators, warping reality and casting an enduring shadow over the vibrant tableau of higher education.
In the whirlwind of academia, deans and administrators often face an onslaught of problems and failures, while tales of progress and success arrive only in hushed tones. These issues, demanding immediate attention and intervention, inevitably overshadow the multitude of positive achievements. The outcome of this distortion is a grim and misshapen perspective of the academic environment, potentially breeding cynicism, disillusionment, and a jaded outlook.
However, this Dark Mirage can be confronted and mitigated. The antidote lies in the proactive pursuit of positivity. Engaging in open dialogues with students, conducting focus groups, and soliciting feedback can reveal a rich vein of positivity that is often overlooked. Student evaluations, though often perceived as harsh scrutiny, can shed light on the dedication and commitment of faculty members, illuminating the profound impact they have on shaping students' futures.
Another force that dispels the Dark Mirage is the tradition of academic celebrations, such as commencements and graduation ceremonies. Beyond the pomp and circumstance, these events serve a significant cultural function. They act as a resounding gong that silences the negativity and refocuses our attention onto achievements and victories. Suddenly, it becomes clear that faculty members, even those embroiled in conflicts with each other, receive abundant gratitude from students. Those involved in disputes are, in fact, exceptional educators who all contribute significantly towards a common goal. Their disagreements appear minuscule and trivial, while their shared grievances seem to fade in the brilliance of their collective achievements.
In the quest to overcome the Dark Mirage, the aim isn't to retreat into a rose-tinted reality. Instead, it's about consciously fostering a balanced perspective. It's about choosing to feed the right wolf, acknowledging the presence of the dark but actively nurturing the light, following the wisdom of Native American folklore.
So, as we navigate the intricate ecosystem of academia, we must actively amplify the whispers of positivity and celebrate the triumphs that often go unnoticed. By doing so, we can ensure that the Dark Mirage does not eclipse our shared pursuit of a fair, equitable, inclusive and loving society.
May 15, 2023
If a robot can beat your writing assignment, it is time to rethink it
The rise of machines, with AI bots like OpenAI's ChatGPT replicating human-like text generation, compels us to question our education system's foundations. Is reflective or expository writing a valid assessment when AI can mimic average student work? The answer isn't straightforward. However, this crisis provides an opportunity to redefine our understanding of writing and its relation to thought.
The advent of AI challenges us to de-emphasize the mechanics of writing, such as grammar, style, and syntax, that a machine can master. Instead, we should focus on more complex aspects of writing that are inherently human: original thought, artful language, and narratives that resonate with the human experience. This shift, although jarring, is necessary. It signals not just a technological revolution, but a cultural, intellectual, and educational upheaval.
The AI revolution illuminates a harsh truth: traditional education and assessment methods are no longer sufficient. The challenge is to teach students not just to write well, but to think originally, create artfully, and understand deeply. In the face of the machine, we must thrive in domains that are distinctly human.
In this technological metamorphosis, we're compelled to redefine our work and our value. Perhaps we are not merely creators, analysts, or workers. Perhaps we are artists, thinkers, dreamers. And perhaps, in this transition, we will find our survival and our redemption.
We must revisit traditional teaching methodologies, challenging the efficacy of our current writing assignments. As educators, we can run a simple test for clarity: input your exact writing assignment into ChatGPT. If the AI produces a B-grade paper, it's time to rethink.
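For instructors who want to run this self-test repeatedly, it can also be scripted. The sketch below is a minimal illustration, assuming the `openai` Python library and an `OPENAI_API_KEY` in the environment; the model name, the system prompt, and the helper names are my own illustrative choices, not anything the post prescribes.

```python
# Sketch: probe your own writing assignment against a chatbot.
# build_probe() is pure and needs no network; run_probe() wires it to the
# (assumed) OpenAI API and requires a real key to execute.

def build_probe(assignment_text: str) -> list[dict]:
    """Wrap an assignment prompt as a chat request, much as a student would paste it."""
    return [
        {"role": "system",
         "content": "You are an undergraduate completing a course assignment."},
        {"role": "user", "content": assignment_text},
    ]

def run_probe(assignment_text: str, model: str = "gpt-4o-mini") -> str:
    """Send the probe to the chat API (hypothetical wiring; adjust to your setup)."""
    from openai import OpenAI  # pip install openai
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=build_probe(assignment_text),
    )
    return response.choices[0].message.content

# Usage (requires a real API key):
#   essay = run_probe("In 500 words, discuss the causes of the 2008 financial crisis.")
#   print(essay)  # if this reads like a B-grade paper, rethink the assignment
```

Grading the output is still a human judgment call; the script only removes the friction of pasting each assignment by hand.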
One option is to replace the writing assignment with an alternative assessment, such as oral exams, multiple choice, or short answer tests. Another option is to transform the assignment to require students to engage with AI, like ChatGPT. This approach would involve designing assignments that test not just writing ability, but also the capacity to evaluate AI outputs critically, discern the nuances differentiating human intellect from AI, and incorporate these insights creatively. In response to ChatGPT, dramatically raise your expectations of student writing, for now they have a lot of help.
AI's advent should be viewed not as a threat, but as an opportunity to explore new pedagogical frontiers. Our learning community is leading this exploration into AI-integrated education, and we invite all to join this journey.
In essence, AI's rise can help us rethink education. We must focus not just on subject mastery, but also on developing original thinking, creativity, and nuanced intelligence. In the face of the machine, we can still shine in our distinctly human domains, thriving in this technological metamorphosis. The future of education, interwoven with AI, might look different, but it promises a realm of possibilities, allowing us to redefine our unique human value.
May 6, 2023
The Academic Game of Thrones: Why We Should Keep Students Out of the Fray
In the realm of academia, an intriguing and complex dance often unfolds between colleagues. This interplay of power dynamics is reminiscent of a game of thrones, a battle for influence and alliances in the hallowed halls of learning. Our evolutionary history has primed us for such intrigue, urging us to engage in tactical maneuvering and social alliances. Yet, in these scholarly environs, we must be cautious, for the most insidious complications arise when students are unwittingly pulled into the fray.
For the most part, these academic machinations are
innocuous, as scholars engage in passionate debates and vie for recognition.
But when the mission of education is subordinated to the natural pull of
micropolitics, these skirmishes can become downright detrimental. As faculty
members, our responsibility lies in cultivating and nurturing the minds of our
students, not in ensnaring them in the pettiness of our personal battles.
Faculty members often form deep bonds with their students,
particularly in professional programs, where the lines between mentorship and
friendship may blur. It is in these settings that the temptation to draw
students into the web of intrigue is at its strongest. The telltale signs of
this breach of trust can be observed when students lodge complaints against faculty
members that are tinged with details and information only another faculty
member could provide.
If ever a student approaches you with gossip or concerns
about another faculty member, the only ethical course of action is to gently
guide them toward the appropriate channels for addressing their grievances. It
is essential to maintain a professional demeanor and resist the urge to engage
in the exchange of rumors and hearsay. The student's concerns should be
directed to the department chair or the dean, who are better equipped to handle
such matters.
Our natural instincts, honed by millennia of evolution, may
not always serve us well in the modern world. In the academic sphere, where our
primary objective is the pursuit of knowledge and the enlightenment of our
students, it is crucial that we resist the allure of intrigue and instead focus
on fostering an environment of collaboration and support. When we prioritize
the mission of education over our primal fascination with power struggles, we
can better serve our students and advance the noble cause of higher education.
Apr 30, 2023
The Art of Yielding: How to Lose an Argument and Gain Self-Respect
One may argue that falling in love with our own opinions is a natural byproduct of our passion for certain ideas or ideals. Yet, this attachment often blinds us to the possibility of alternative perspectives and can lead us to believe that any rejection of our opinions is a personal affront—a sign of disrespect. The truth is, our opinions and beliefs are not a complete representation of who we are. It is crucial to recognize that we can be respected even when our ideas are not universally accepted.
The trick, then, is not to overinvest in one's own opinion or a particular point. We must learn the wisdom of yielding, of knowing when to let go of our arguments and embrace the insights of others. This does not mean that we should blindly accept every contrary idea that comes our way; rather, it is a matter of acknowledging that our perspective is just one of many.
If you find yourself feeling disrespected when others disagree with you, it is worth examining the root of these emotions. Often, such feelings stem from a place of insecurity or a lack of self-respect. Ironically, the stubborn attachment to our opinions is both a symptom and a cause of these issues. It is a vicious cycle that can only be broken by cultivating a healthy sense of self-respect and a willingness to yield when appropriate.
Indeed, the process of learning how to lose an argument can have profound effects on our self-esteem. When we are able to let go of our need to be right, we open ourselves to the possibility of growth and self-improvement. It is through this willingness to yield that we foster a sense of humility and respect for the ideas of others. Ultimately, it is not about winning or losing the argument; it is about embracing the diversity of perspectives that enrich our collective wisdom.
So, the next time you find yourself in a heated debate, remember to practice the art of yielding. You may just find that, in losing an argument, you gain a deeper sense of self-respect and a renewed appreciation for the complexity of human thought.
Apr 21, 2023
The art of rich prompting in writing with AI
The world of AI-generated content can be a wild ride, offering profound insights that leave us in awe or underwhelming results that disappoint. Embracing the unpredictability of AI-generated content and adjusting our expectations accordingly is essential to harnessing AI's full potential as a writing partner while preserving our individuality as writers.
Let's dive into two prompts—one poor and one rich—within the realm of philosophy:
Poor prompt: "Write about dialogue and relation."
Rich prompt: "Turn this into a part of a scholarly book chapter. Provide supporting arguments and examples. Do not use subheadings: The ontological understanding of dialogue (Buber) is both powerful and limited. Powerful because it focuses on relations rather than entities or actions. Limited because dialogue is not culturally universal and excludes people with disabilities. Relation is a better category for educational philosophy than dialogue."
Try both, especially if you have access to GPT-4, and appreciate the difference. The poor prompt is vague and generic, while the rich prompt has enough original thought and instructions to invite a nuanced, in-depth exploration of a specific question in dialogic philosophy, providing context and direction for the AI to generate meaningful content.
When AI-generated content falls short of our expectations, it may indicate one of two things: either the prompt is too poor, or you've stumbled upon a genuinely original idea. AI does not handle original ideas well because it has not encountered them before. Take this opportunity to engage in old-fashioned original writing, and then reintroduce AI into the mix to enhance your work with its vast knowledge and pattern-recognition capabilities.
In other words, wraiting still involves thinking and generating ideas. All it does is turn compressed, rough ideas into more comprehensible text output. Many people, but not all, start their process by jotting down on paper the initial thoughts, brief arguments, and short thesis statements. This is the most exciting part of brainstorming. Turning it into a coherent text is not necessarily the most rewarding part of the writing process, so we should use AI to assist with that. The synergy between human creativity and artificial intelligence promises to lead us to new intellectual heights.
Apr 15, 2023
Anti-CRT is anti-Christian
According to Brookings, opponents fear that CRT admonishes all white people for being oppressors while classifying all Black people as hopelessly oppressed victims. These fears have spurred school boards and state legislatures from Tennessee to Idaho to ban teaching about racism in classrooms. The legislation mostly bans discussion, training, and orientation premised on the claim that the U.S. is inherently racist, as well as any discussion of conscious and unconscious bias or privilege.
Let me put aside the fact that those claims misrepresent CRT; for a more accurate description, read the Stanford Encyclopedia of Philosophy. My argument concerns the incongruity between the anti-Critical Race Theory movement and Christian values. Emphasizing humility and repentance, Christianity encourages self-reflection and acknowledges the inherent imperfections of humanity. Contrarily, the anti-CRT movement tends to propagate a flawless portrayal of American history, which is fundamentally at odds with the Christian belief system.
Pride, considered one of the deadliest sins, is evident in the anti-CRT assertion that America is exceptional and unparalleled. In contrast, CRT is a structuralist approach that examines the deep-seated, systemic roots of racism and other biases. This perspective aligns with the Christian notion of original sin, which posits that humans are innately, profoundly flawed beings.
To clarify, this is not an endorsement of CRT as a flawless theory; it possesses both strengths and weaknesses. Instead, this critique aims to illuminate the inherent contradictions within the anti-CRT movement, specifically regarding its purported Christian values. Many anti-CRT proponents identify as Christians, but the discord between these two belief systems necessitates a choice. To maintain intellectual and spiritual integrity, one must either embrace Christianity or champion the anti-CRT cause, as the two are incompatible. I am not saying CRT is Christian. But anti-CRT is anti-Christian for sure.
Apr 6, 2023
A billion-dollar idea: Learning Workflows and Item Response Theory
An example: One common process that requires multiple approvals for quality control is the expense reimbursement process within an organization. Employees often incur various expenses while performing their job duties, such as travel, meals, and office supplies. To ensure accurate and fair reimbursement, the organization typically implements a multi-step approval process.
- Expense submission: The employee gathers all necessary receipts and documentation and submits an expense report, detailing the nature and amount of each expenditure.
- Managerial approval: The employee's direct manager reviews the submitted expense report to verify that the expenses are legitimate, reasonable, and in line with company policy. The manager may ask for additional information or clarification if needed before approving the report.
- Finance department review: After managerial approval, the expense report is sent to the finance department. The finance team checks the report for compliance with company policies, budgetary constraints, and tax regulations. They also ensure the accuracy of the expense categorization and proper documentation.
- Final approval and reimbursement: Upon the finance department's approval, the expense report is processed for reimbursement. The employee receives the approved reimbursement amount.
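The four-step chain above can be sketched as a tiny state machine. This is only an illustration of the idea, not anyone's actual system; the status names and transition table are my own.

```python
from enum import Enum, auto

class Status(Enum):
    SUBMITTED = auto()
    MANAGER_APPROVED = auto()
    FINANCE_APPROVED = auto()
    REIMBURSED = auto()
    RETURNED = auto()  # sent back for more information

# Each step may only follow the one immediately before it.
NEXT = {
    Status.SUBMITTED: Status.MANAGER_APPROVED,
    Status.MANAGER_APPROVED: Status.FINANCE_APPROVED,
    Status.FINANCE_APPROVED: Status.REIMBURSED,
}

def advance(status: Status, approved: bool) -> Status:
    """Move a report forward on approval, or return it for clarification."""
    if not approved:
        return Status.RETURNED
    return NEXT.get(status, status)
```

For example, `advance(Status.SUBMITTED, True)` yields `Status.MANAGER_APPROVED`, while a rejection at any step sends the report back to the employee.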
Learning workflows would differentiate between users based on their performance, allowing those who consistently excel to gain "master-user" status. You just need to treat every form submission as a test and keep a record for each employee. This approach would ensure that quality-control efforts are focused on those who need them most, saving time and resources while maintaining high standards.
Item response theory (IRT) is a statistical framework used in test development to analyze and model the relationship between an individual's latent ability (e.g., proficiency in a subject) and their probability of responding correctly to test items (e.g., questions). IRT is widely used in educational testing and psychometric research to design and evaluate assessments, ensuring they are reliable, valid, and fair. In IRT, each test item is characterized by a set of parameters, which provide information about the item's difficulty, discrimination, and guessing. These parameters help in understanding how well an item can differentiate between individuals with different levels of ability. It assumes that if you can answer a more difficult test question, you are very likely to answer the easier ones.
Applying IRT (Rasch modeling, to be more specific) to business processes means treating each form submission as a test, incentivizing employees to be more careful with their work. Consistently high performance could lead to master-user status, reducing the need for managers to spend time reviewing numerous forms and requests. Of course, master-users should be paid more, for they cost less to oversee.
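The core of the idea can be sketched in a few lines of Python. This is a minimal illustration under assumed data, not a production psychometric tool: the form difficulties, pass/fail records, threshold, and function names are all hypothetical, and the ability estimate uses a crude grid search rather than a proper Rasch calibration.

```python
import math

def rasch_probability(ability: float, difficulty: float) -> float:
    """Rasch model: probability that a person with a given latent ability
    submits an error-free form of a given difficulty."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def estimate_ability(results, difficulties, lo=-4.0, hi=4.0, steps=60):
    """Crude maximum-likelihood estimate of ability via grid search.
    results[i] is 1 if submission i passed review, 0 otherwise."""
    def log_likelihood(theta):
        ll = 0.0
        for r, b in zip(results, difficulties):
            p = rasch_probability(theta, b)
            ll += math.log(p) if r else math.log(1.0 - p)
        return ll
    grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return max(grid, key=log_likelihood)

def is_master_user(results, difficulties, threshold=1.5):
    """Grant master-user status when estimated ability clears a threshold."""
    return estimate_ability(results, difficulties) >= threshold

# Hypothetical records: an employee who passes review even on hard forms...
expert = [1, 1, 1, 1, 1, 0]
# ...versus one who struggles even with easy ones.
novice = [1, 0, 0, 1, 0, 0]
form_difficulties = [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0]

print(is_master_user(expert, form_difficulties))  # True
print(is_master_user(novice, form_difficulties))  # False
```

The point of the sketch is the workflow logic: each submission is scored against its difficulty, and once an employee's estimated ability is high enough, their submissions could skip the managerial review step.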
This combination of smart workflows and item response theory offers a significant business opportunity. Developing an algorithm that transforms every workflow into a learning system could revolutionize the way organizations operate. As users become more competent and achieve master-user status, the algorithm would automatically streamline processes, freeing up more resources and reducing bureaucracy.
If you want to be the next billionaire and understand organization theory, go for it. Thank me when you make it big.
Apr 5, 2023
Carry-Forwards and Smart Cuts
Take the CSU system, for example. For the next fiscal year (which starts on July 1), it estimates having $2.5 billion in total core reserves, with $714 million in uncommitted funds. While this may seem like a substantial amount, it's crucial to have a financial safety net and development agenda in place when dealing with multiple priorities and navigating the intricacies of higher education funding.
There are three major reasons for carry-forward in university budgets:
- The Multicolored Budget: University budgets are complex, with different types of money and spending restrictions. Picture a rainbow, where each color represents a different type of funding. A LOT of carry-forward money is restricted, with only specific ways of spending it. Not all money can be spent instantly, due to multi-year commitments. In other words, if we promise a newly hired faculty member certain startup funds for two years, we need to make sure the money is there. And then there are emergencies; some of them may not be seen by others as emergencies, but sometimes we have to react to changing circumstances with a quick program or an event.
- Investing in the Future: To thrive, a university needs to focus on more than just maintaining its current state. It must invest in development, which requires accumulating funds. Carry-forward money allows universities to support new projects, research, and initiatives that will help them grow and evolve. The paradox here is that in order to invest in the future, universities must save money today.
- The disproportionate impact of small cuts: Operating Expense (OE) budgets for a university's academic units (which tend to be the largest chunk of the overall budget) usually account for less than 5% of those units' totals; 95% or more goes to personnel, in salaries and benefits. Cutting personnel is difficult because we rarely lay off staff, and furloughs are incredibly demotivating and frustrating. So a cut that is small relative to the overall budget becomes a huge cut to operating expenses, where it tends to fall. That's why universities need decentralized reserves, like carry-forward, to maintain stability and keep their promises. By having a sizable carry-forward, academic units can respond to challenges without causing chaos in their operations.
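The arithmetic behind the disproportionate impact of small cuts is worth making explicit. The figures below are hypothetical, chosen only to illustrate the mechanism:

```python
# Illustrative figures (hypothetical): personnel is effectively protected,
# so a cut to the total budget must come out of the small OE slice.
total_budget = 100_000_000  # a unit's total annual budget, in dollars
oe_share = 0.05             # operating expenses: ~5% of the total
operating = total_budget * oe_share

overall_cut = 0.03          # a "small" 3% cut to the total budget
cut_amount = total_budget * overall_cut

# If personnel is untouchable, the whole cut lands on operating expenses.
oe_cut_fraction = cut_amount / operating
print(f"A {overall_cut:.0%} overall cut removes {oe_cut_fraction:.0%} of operating expenses")
# → A 3% overall cut removes 60% of operating expenses
```

A nominally modest 3% trim thus wipes out more than half of what a unit actually has available for day-to-day operations, which is exactly why a carry-forward cushion matters.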
Apr 2, 2023
We are not as complex as we'd like to think
For centuries, humans have marveled at their own intellect and linguistic abilities, often attributing these capabilities to divine origins. This self-amazement led to the concept of being created in the image of a deity. However, over recent decades, zoologists and zoo-psychologists have been gradually dismantling this grandiose self-image by demonstrating that animals share many traits and abilities with humans.
For instance, chimpanzees exhibit tool usage, problem-solving skills, and even rudimentary communication through gestures and vocalizations. Similarly, dolphins have been observed to possess complex social structures and use unique signature whistles to communicate with one another, while African Grey parrots can mimic human speech and understand a variety of words and phrases.
Now, it is the turn of software engineers to further deflate our pride. The ability to generate language, once considered a unique and sophisticated human trait, is now being replicated by AI algorithms like ChatGPT. This demonstrates that our linguistic prowess is not as mysterious or complex as we once thought. In fact, we often recycle and rephrase what we've heard or read before, which diminishes the perceived essence of our humanity.
This realization, although humbling, can lead to a healthier perspective on our place in the world. The true essence of humanity may be smaller than we initially believed, possibly encompassing higher-level creative thinking and advanced ethical reasoning. These are attributes that, so far, neither animals nor machines have been able to fully replicate.
As we come to terms with the diminishing divide between humans, animals, and machines, it may be time to shift our focus from trying to prove our uniqueness to embracing our similarities. By recognizing that we share many traits and abilities with other beings, we can foster a greater sense of empathy and understanding, ultimately benefiting both our own species and the world around us.
Mar 27, 2023
Can negligence be racist?
Bad management and institutional racism are not always separate entities; they can intertwine and create toxic environments for faculty and staff of color. Bad management may involve poor leadership, decision-making, and communication, while institutional racism refers to policies and practices that perpetuate racial disparities. Although bad management might not explicitly intend to discriminate, its impact can exacerbate systemic racism if institutional practices negatively affect marginalized groups.
It's crucial to consider both intent and impact when assessing organizational practices. Bureaucratic delays and disorganized, confusing procedures might unintentionally create a hostile work environment for faculty and staff of color. Different groups experience institutional barriers in unique ways, shaped by the specific challenges and obstacles they face. What may seem like a mere annoyance to me, as a white man, could very well feel like full-scale harassment to a person from a marginalized group. It is important to recognize that our own personal experiences and perspectives are not universal. By acknowledging these differences and being mindful of the unique challenges faced by individuals from various backgrounds, we can work together to create a more inclusive and supportive environment for all, and avoid the mistake of confusing our own perception with universal truth.
In the realm of social justice, negligence in organizational practices can be considered racist, sexist, ableist, or homophobic, much like how negligence can be criminal in law. A lack of awareness about accessibility needs for disabled employees may result in ableist practices that hinder their career growth. Though unintentional, this negligence contributes to an unjust work environment.
Organizations must recognize their responsibility to address discriminatory practices, regardless of their origin and intent. By examining policies, improving procedures, offering support and resources for marginalized employees, and fostering a culture of inclusion and accountability, we can create truly equitable workplaces that challenge both bad management and institutional racism.
Mar 18, 2023
Budget Cuts: Breaking the Quick Fix Paradox
Universities often resort to budget cuts in difficult times, seeking immediate relief while neglecting long-term strategies to increase and diversify revenue. However, when times are good, they reinforce the status quo rather than invest in innovative projects that could make them more financially resilient. These investments take time and require setting aside funds for growth, but campus leaders often struggle to justify such frugality when budgets are in good shape. So everyone gets something. Enormous effort is spent on lobbying state authorities for bigger budgets, but very little on exploring the higher education market and opportunities to pivot and grow. This is especially troubling given the long-term demographic trend of a declining college-age population.
The transition from funding existing programs to investing in innovation is not easy. It will face political resistance and could upset many within the university community. However, embracing this shift is crucial for long-term success. When budgets return to "normal," it's essential to remember that normalcy can breed complacency. To avoid this pitfall, universities should focus on funding units and programs that demonstrate specific plans for cutting costs or generating revenue.
This is nothing new; it is just an attempt to remember these things next time we have some breathing room. In the short run, the options are few: spend reserves, freeze hiring, cut expenses. Our system has been hit with budget crises many times before; not one of them has produced a meaningful reorientation of the system. It is still heavily dependent on state funding, which is an unsustainable strategy.
Mar 13, 2023
This is embarrassing
Therefore, when it comes to giving feedback on performance problems, it is important to do so in a private, quiet, and supportive manner. This allows the faculty member to save face and maintain their dignity, while also receiving constructive criticism that can help them improve their performance. This is how evaluations are set up, on both unionized and non-unionized campuses.
Some faculty members prefer to keep their own personnel matters confidential while not minding exposing those of others. This is inconsistent and creates a double standard that undermines trust and respect in the workplace. If you do not want to be embarrassed, it is important to treat others with the same respect and confidentiality that you would want for yourself. This rule does not change if one is motivated by urgent matters related to social justice. If we give up on due process, the struggle for equity and justice is undermined rather than enhanced.
Transparency standards do not always apply in higher education, especially when it comes to personnel matters. However, this does not mean that accountability and fairness should be compromised. It simply means that sensitive matters should be handled with care and discretion, in order to avoid causing unnecessary embarrassment or harm.
By recognizing the power of embarrassment and taking steps to minimize it in the workplace, higher education institutions can create a more supportive and empowering environment for their faculty members. This, in turn, can lead to improved performance, job satisfaction, and overall well-being for everyone involved.
Ultimately, the key to handling sensitive matters in higher education is to strike a balance between transparency and discretion, accountability and support. By prioritizing the well-being and dignity of faculty and staff members, while also maintaining accountability and fairness, institutions can create a workplace culture that is both productive and empowering for all.
Feb 24, 2023
Wraiting vs. writing
Just to give a glimpse of the new world, consider these wraiting tips. It is the tip of the iceberg, for there are a lot more nuances to it than I know about, and even more that I do not. Here are four key roles that AI can play in wraiting:
- Brainstorming: One of the main roles that AI can play in wraiting is in the brainstorming stage of the writing process. It can help writers generate new ideas, provide suggestions for topics to explore, and even conduct initial literature reviews (only for well-explored topics). These tools can also be used to create outlines and plan the structure of a piece of writing, making it easier to organize ideas and stay on track.
- Critiquing your ideas: Another important role that AI can play in wraiting is as a critical partner to chat about ideas with. With chatbot-style interactions, the AI can engage in a conversation about the writer's ideas, ask questions, provide feedback, and offer suggestions. This can help writers refine their ideas, explore new directions, and gain valuable insights into their writing.
- Turning dense chunks of ideas into full paragraphs and segments: By using natural language processing algorithms, wraiting tools can analyze the structure and meaning of sentences and suggest improvements that can help writers better articulate their ideas. This can be especially useful for writers who struggle with writer's block or who find it challenging to organize their thoughts into cohesive paragraphs. AI can look for additional arguments, examples, metaphors, and references to support or challenge your claims.
- Editing: Finally, AI can play a key role in the editing process of wraiting. From grammar and spelling to structure, flow, style, genre, and audience analysis, wraiting tools can help writers identify areas for improvement and provide suggestions for making changes. AI-powered editing tools can also help writers save time and effort by automatically correcting common errors and suggesting alternative phrasing.
Feb 23, 2023
"Only able-bodied are welcome"
In semiotics, the absence of a message can be the same as the presence of a message. This concept, known as "the absence of a sign," highlights the importance of context and interpretation in understanding the messages being conveyed. By failing to include language of accommodation in event advertising, organizers are sending a message of exclusion to individuals with disabilities.
At our college, we are committed to creating a welcoming and inclusive environment for all, including people with disabilities. Our recent policy states that events and media resources that are not inclusive may not be advertised on any college-controlled information resources. When someone asks me to share an upcoming event or put a link to a video in the college's update, I click on the link and check whether there is an RSVP and whether it asks about accommodations. I also check that the event is not too soon, because we know that finding a captioner or an ASL interpreter on short notice is impossible. Additionally, I check whether the video has good, human-edited captions or an ASL feed. If these criteria are not met, I apologize and decline to share.
The policy may be seen as controversial because the events and resources in question may benefit other marginalized people. The larger society, unfortunately, has not yet come to regard exclusionary silent statements like "for hearing people only" as outrageous. An event or a video may be deemed too important to withhold even though it is not accessible. However, just because the larger society has not yet recognized the importance of inclusivity doesn't mean we should do nothing. One day, we will all be embarrassed by old announcements that invite people to an open event yet say nothing about accommodation.
Forwarding information about an exclusionary event or resource is endorsing it, and therefore, I personally can no longer do it, and I don’t think anyone else should.
- I call on the entire campus to adopt a similar policy.
- I also call on everyone to boycott events and resources that are not accessible.
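The screening routine described above is essentially a checklist, and a checklist is easy to encode. The sketch below is my own illustration: the field names, the fourteen-day lead time, and the function itself are assumptions, not the college's actual policy mechanism.

```python
from datetime import date, timedelta

# Assumed minimum lead time to find a captioner or an ASL interpreter.
MIN_LEAD = timedelta(days=14)

def ok_to_share(event: dict, today: date) -> bool:
    """Decline to share unless every accessibility criterion is met
    (a sketch; event fields are hypothetical)."""
    return (
        event.get("has_rsvp", False)
        and event.get("asks_about_accommodations", False)
        and event["date"] - today >= MIN_LEAD
    )

accessible = {"has_rsvp": True, "asks_about_accommodations": True,
              "date": date(2023, 4, 1)}
last_minute = {"has_rsvp": True, "asks_about_accommodations": True,
               "date": date(2023, 2, 25)}

print(ok_to_share(accessible, today=date(2023, 2, 20)))   # True
print(ok_to_share(last_minute, today=date(2023, 2, 20)))  # False
```

A video version would add a check for human-edited captions or an ASL feed; the structure stays the same.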
Feb 17, 2023
To R2 or not to R2: Facts first, debate later
For example, let's say someone suggests classifying an R3 university as an R2 institution. Many people may object, raising concerns about potential negative consequences such as diminishing the quality of teaching and suddenly changing expectations of faculty. Still, others might say, "Yes, but here's what we need in terms of resources to get it and to maintain it." Without first establishing the basic facts, these objections may be based on misunderstandings or incorrect assumptions.
If we take the time to gather the basic facts before beginning a debate, we can avoid these misunderstandings and have a more productive discussion. In the scenario above, a fact-finding committee could be established to determine whether the university in question already meets the criteria for the R2 designation. If it does, then there's no need for a debate at all - the university simply is an R2 institution; it's a matter of correcting the record by providing current information to the Carnegie Foundation. If it's close to meeting the criteria, then the debate can focus on how the university can make the necessary improvements. If major investment is needed to obtain the R2 designation, then the debate can focus on whether or not that investment is justified. But a debate where participants assume different facts is simply hot air, or an excuse to vent other grievances and anxieties. It is like debating a movie you have not yet seen.
In general, it's always more productive to establish the basic facts before beginning a debate or raising objections. This allows for a more informed and productive discussion and ensures that any objections or concerns are based on a clear understanding of the situation. So the next time you find yourself in a discussion or debate, remember to start with the facts first, and make sure all involved agree on the facts - it will lead to a more productive and informed conversation.
Feb 16, 2023
Poor planning and justice
Let us admit it, some faculty members lack project management skills because they haven't had the opportunity to develop them on their career path. It takes courage to admit that you don't know something and learn. And it can be even more difficult once you have those coveted letters after your name. However, these skills are essential for success in any field, so it's important to learn them.
When preparing for an event, it's important to manage it like a project. Here are some commonly known tips on how to manage a project or event with several people involved:
- Start planning early: Plan the event well in advance to ensure that there is enough time to organize everything. Otherwise, we end up scrambling to find an accessible room or an available ASL interpreter. Improvisation and spontaneity are cute, but end up hurting someone.
- Develop a timeline: Create a timeline that outlines the key milestones and deadlines for the event. Counting backward is critical here. For example, if we want to announce an event and advertise it, we need at least a month. To advertise it, we need to confirm key speakers and their topics, which takes another month. Rooms are scarce, so we need to book a room at least two months in advance. To book a room, we need to know the general format, which will take a whole meeting to figure out, etc.
- Establish roles and responsibilities: Assign clear roles and responsibilities to each person involved in the event. Don't assume that someone will do it; someone does not exist. In academic institutions, seek support from your chairs and deans; they can ask staff to help with specific tasks or match you with other resources on campus. We have the event support request form, but it does not have to be that sophisticated.
- Communicate and monitor progress regularly: Establish effective communication channels between team members and stakeholders to ensure that everyone is informed about the progress of the event. A weekly progress check is the most common routine. Put these checks in your calendar.
- Anticipate and manage risks: Identify potential risks and develop a plan to mitigate them. What could go wrong, and what's plan B? The most common error is not asking "what if it rains?" or "what if it's 100°F outside?"
- Debrief, identify lessons learned, write them down, and pass them on to the next person.
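The "counting backward" step in the timeline tip lends itself to a small worked example. The lead times below are hypothetical, loosely following the post's scenario, and the cumulative scheduling logic is my own sketch:

```python
from datetime import date, timedelta

def backward_schedule(event_date, lead_times):
    """Count backward from the event date. lead_times is a list of
    (task, days) pairs in reverse order of execution (the task closest
    to the event first); each deadline is pushed back cumulatively."""
    deadlines = {}
    cursor = event_date
    for task, days in lead_times:
        cursor -= timedelta(days=days)
        deadlines[task] = cursor
    return deadlines

# Hypothetical lead times for a May 1 event.
schedule = backward_schedule(
    date(2023, 5, 1),
    [
        ("announce and advertise", 30),      # ads need a month to run
        ("confirm speakers and topics", 30), # another month before that
        ("book a room", 60),                 # rooms are scarce
        ("agree on the format", 7),          # takes a meeting to figure out
    ],
)
for task, deadline in schedule.items():
    print(f"{deadline}: {task}")
```

Running this shows that a casual May event actually has to start taking shape the previous December, which is exactly the point of counting backward.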
Feb 11, 2023
Why Innuendo Has No Place in Academia
Here are some examples of what innuendo might look like in an academic business meeting:
- During a discussion on a curriculum proposal, someone says, "If only most faculty knew the basic facts about curriculum forms." This is a direct attack on the proposer disguised as a general lament about a lack of faculty training.
- Someone says, "Isn't this why we have support staff?" implying that staff are not doing their jobs.
- Someone says in a public meeting, "Some people can't teach and yet they present themselves as scholars." This kind of innuendo is even more damaging because it's not clear who the person has in mind, leaving many people feeling hurt or offended.
It's important to remember that personnel matters should not be the subject of public debates. If you're unhappy with someone's performance, you should take the appropriate steps to address it, rather than making hints or implications in a public setting. Engaging in innuendo will only harm your reputation and make others less likely to want to work with you.
As the facilitator of a meeting, you must make sure that innuendo has no place in the discussion. If someone starts to engage in this behavior, it's the facilitator's responsibility to shut it down and keep the meeting focused on its purpose. Innuendo does nothing to further the discussion and only serves to bring negativity to the table.
Innuendo has no place in academic or any other settings. It's destructive, undermines professional relationships, and goes against the principles of due process and democracy. If you have concerns about someone's performance, there are better ways to address it than through hints or suggestions. By upholding a "no innuendo" rule in meetings, we can create a more positive and productive environment for all.
Feb 5, 2023
AI in Education Learning Community
Here's my plan: (1) Each PLC member commits some time to playing around with the OpenAI language engine. Try out different prompts for real-life teaching, research, and productivity situations. (2) Meet bi-weekly on Zoom over lunch to discuss findings. (3) Eventually, find a way to rate the most productive prompts and come up with a list of the top-10 to top-25 most useful ones. This information may not be publishable in a scholarly journal, but it could be shared. To keep things manageable, I'm starting with my colleagues in the College of Education at Sac State. But others can easily start their own groups and work independently. It would be great to compare notes from different groups later. If there's enough interest, I may plan an unconference next fall.
There are many types of AI tools out there. The AI Chat is just the easiest to use and is strictly language-generating, which makes for a better comparison.
Just to give you an idea what sort of prompts can be tested: I found last week that AI excels at writing routine, formulaic texts that humans don't like to write. You can give it a couple of specific points and ask it to write a decent recommendation letter for a student. It's also good at converting lists into narratives. Copy a list of your job experiences from your resume and ask it to turn it into a narrative. One of its best uses is specialized editing, like making your text sound like it was written by a native English speaker. This can be significant for English learners of all ages. But my findings alone aren't enough. We need to collectively test, evaluate, and rate these uses to make meaningful conclusions. Anyone interested?
Jan 27, 2023
Chatting with AI about AI
- Me: "I am trying to find a metaphor that describes the feeling when you realize that most of your learning over many years is now becoming obsolete."
- AI: "One possible metaphor for the feeling of realizing that one's hard-earned knowledge is becoming obsolete could be "watching a beautiful sandcastle wash away in the tide." This metaphor evokes the idea of something that was carefully built and treasured being swept away by forces beyond one's control, and also suggests a sense of nostalgia and loss."
I know what is going to happen, and it is not good. Higher ed will start fighting another losing battle. We will look for algorithms to detect the use of AI. We will proctor exams and make students write with pen and paper. We will regress to multiple-choice testing, etc. This happens every time: as soon as the real world changes and asks for something else, we keep teaching the old things, because that is what we have been doing all along.
What should happen is something completely different. We need to reimagine how writing in real life will now happen with the help of AI. We need to explicitly teach our students how to use AI ethically, while also enjoying its potential for increased productivity. And finally, we should find forms of assessment that focus not on the originality of the text, but on the originality of ideas. Some of the sacred cows, like learning the APA format for references, are completely irrelevant now. The AI can fix your references in a second or flip them into another format. It is no longer a relevant skill, just as lighting a fire with flint is not. Editing for grammar, summarizing an argument, and basic literature searches are among those irrelevant skills. Therefore, we should spend more time on original, creative, and independent thinking. The question is, how do we do it?
I suspect it still comes back to Bloom's taxonomy. Evaluating and creating knowledge are still the higher goals. However, the definitions of evaluating and creating the new will change. New knowledge will not amount to new text. The quality of new knowledge will not be reduced to proper writing, or even include it.
No one is sure what is going to happen, but one thing is for certain. The world is changing.
So I asked the AI if it thinks AI will change the world of work. Here is the response. It may be a little biased, and terribly boring, but I must agree:
“AI has the potential to greatly impact the world of work. AI-powered automation, machine learning, and natural language processing are already being used in many industries to automate repetitive tasks, improve decision-making, and increase efficiency.
In the short-term, AI has the potential to automate many jobs, which could lead to job displacement. However, it also has the potential to create new jobs and make existing jobs more efficient and enjoyable.
In the long-term, AI has the potential to significantly change the nature of work, as well as the skills that are in demand. Companies will likely seek employees with skills in areas such as data analysis, machine learning, and software development to help them take advantage of AI technologies.
AI has the potential to improve productivity and create new opportunities for growth and innovation. However, it will also require a significant shift in the way we think about work and the skills that are needed to succeed in the future. It is important for governments, businesses, and educational institutions to work together to ensure that the benefits of AI are widely shared and that everyone has the opportunity to develop the skills they need to thrive in the new economy.”
Jan 23, 2023
Don’t assign homework to your dean or chair
Jan 9, 2023
Weather and the human condition
Dec 11, 2022
Teasing in the workplace
Dec 5, 2022
Double consciousness for Russians
I was thinking of W. E. B. Du Bois's notion of double consciousness when last week an anti-war and anti-Putin TV anchor misspoke and referred to the invading Russian army as “our army,” and said that he wants to help Russian soldiers to get better equipment. This created an outrage in much of Eastern Europe, understandably. People are on edge, and they reacted harshly.
W. E. B. Du Bois described the phenomenon like this: “One ever feels his two-ness,—an American, a Negro; two souls, two thoughts, two unreconciled strivings; two warring ideals in one dark body, whose dogged strength alone keeps it from being torn asunder.” This is not the same as, but similar to, what this anchor was feeling: he still maintains his Russian identity and feels kinship with Russian soldiers who are cold and hungry on the front lines of a war they did not start. He may think they are stupid idiots, but they are still somehow related to him. Like many of us, he wants the invading army to be defeated and his homeland to lose the war it started.
Ukrainians obviously do not feel the split; their hatred for the invaders is both justified and unproblematic. The anti-Putin Russians do. The horror of the story is that one is unable to completely disentangle oneself from the invading horde. One experiences pain and compassion for Ukrainians, and pain and compassion for their tormentors. A person like that looks at himself through the eyes of others, yet retains his own identity.
Again, I am not equating the two experiences; the power dynamic and histories are very different. Yet, parallels like this open a little window into how confused and painful divided identities can be.
Nov 28, 2022
Anger management and academic bullying
Nov 13, 2022
Terrible software. Let us name some names.
How do you know you are stuck with a terrible piece of software? When you are offered a user guide. The longer the guide, the worse the user interface. I am not a software engineer and cannot appreciate the elegance of the code or the creativity of its internal design. But I can definitely tell a bad interface from a good one.
For example, a company called LiaisonEDU sold the CSU system Cal State Apply. Not only does the System itself have to maintain detailed instructions and pay for tech support, but every campus is forced to develop its own set of instructions. Ours for one program alone has 18 slides. The first couple of steps are easy, but then applicants get lost in whatever terminology the developer thinks is commonly understood.
As I mentioned before, I use a couple dozen different platforms for work alone. Here is a picture of my bookmark folder called “Accounts.” Let’s give them some reviews.
OnBase is a terrible program, written by space aliens for space aliens. None of its terminology is used by normal human beings in their offices. Somehow, it opens not with your inbox, where people want you to sign something, but on its very obscure back-office page (with reporting, archives, queries, etc.), from which you have to find your way into the inbox that you need 95% of the time. It has little annoying bugs: you need to sign something, but then you must also find a different button and say the form is completed. Really? Three clicks instead of one? The program was initially designed in 1991 by Hyland Software, and I don’t think anyone has attempted to redesign its user interface since then. And yet governments and hospitals use it, because of legacy issues.
Concur has another thoroughly confusing interface. It was designed for business executives whose assistants have no choice but to figure out what all these buttons mean. It is absolutely unsuitable for the university environment, where most people travel once or twice a year and completely forget all its conventions between trips. Tell me again, how do "requests," "authorizations," and "approvals" relate to each other, and how do you "allocate"? Faculty members have no assistants, so we are forced to have a staff person help them with travel; it is probably 75% of her workload. Some automation, right? You know you got screwed when software costs you more labor than it saves. Again, the program is almost 30 years old, and it shows. And yet, because they have a near monopoly on business travel, they seem to be able to sell their terrible product to the likes of us.
CourseLeaf is actually not bad on the interface front, but it has some weirdly basic gaps in its functionality. It goes back to Leepfrog Technologies, another company from the early '90s. For example, the thing does not let you know when your curriculum proposal has been stuck at some level for weeks. The company also has the audacity to charge for every little change we want to make to the forms. The idea that only the high priests of software can have access to its configuration is just so out of touch with this century’s computing practices.
We have been using a version of PeopleSoft, now the property of Oracle, for some 12 years. It is another example of a very old monster that has fallen behind the times. At least they have an excuse: the databases must be very secure, and security kills agility and user customization. Still, they could figure out a sleeker, more contemporary user interface. You know, something that looks more like Instagram and TikTok, and less like your grandfather’s bank account.
Our auxiliary organization, which handles all grants and many other things, uses something designed by Ultimate Kronos Group, another elderly giant founded in 1977, to keep its timesheets and other HR stuff. The system is not only completely incomprehensible, but so buggy that I have not been able to approve my people’s timesheets for five years now.
Microsoft’s SharePoint and Office 365 are a mixed bag. It took them a decade to catch up with Google in terms of functionality, and in some respects their functionality is now more advanced. However, they somehow cannot overcome their curse of perpetual clunkiness. Everything MS touches will look good and work fine, but… always takes three extra steps to figure out. They really have great engineers, but mediocre user-experience specialists. What they think is cool is actually kind of nerdy, and not in a cute way.
Let’s be fair, some of these are good. Zoom is great, which is why they killed Skype without much effort. I am thankful to them for helping us all survive the pandemic, and for their drive to constantly improve. Qualtrics is a very decent survey platform, mainly because the field of surveys is so competitive and they need to improve to survive. I would call several of their choices into question, but it is something one can learn and use. Adobe Sign is actually a good program; kudos to Adobe for coming up with it not long before the pandemic. It worked great for transitioning our paperwork online. However, its more advanced functions, like branching signature routes depending on the user's first choice, are very, very difficult to use. This is a problem that was solved a decade ago in various survey platforms. Why Adobe, with all its software engineering might, cannot figure it out is beyond my comprehension. It is simple: if the user chooses College A, it goes to one dean; if they choose College B, it goes to another. I know about the workflow feature; it is not there yet and is too complicated.
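To underline how simple the requested routing really is, here is the whole of the logic in a few lines of Python. The college names and dean addresses are made up for illustration; this is a sketch of the dispatch idea, not of how any Adobe product works internally.

```python
# Hypothetical routing table: each college maps to its dean's inbox.
ROUTES = {
    "College A": "dean.a@example.edu",
    "College B": "dean.b@example.edu",
}

def next_signer(college):
    """Pick the next signer based on the user's first choice.
    Raises KeyError if no route is configured for the college."""
    return ROUTES[college]

print(next_signer("College A"))  # dean.a@example.edu
```

One lookup in a table that an office admin could edit themselves; that is the entire feature users are asking for.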
Anyway, I could keep going and going. However, there is good news for kids who want to learn to code and try a start-up: business software is populated mostly by dinosaurs who don’t know what they are doing. They are also responsible for the computer phobia that affects millions of people, who have been conditioned to blame themselves for being unable to figure out some terrible program. Most of these dinosaurs are hopelessly behind the times. Hire a very cool user-interface designer, and you can beat them all. There is no reason submitting a timesheet should be harder than playing a TikTok clip or sending a message. There is no reason signing a document should be harder than liking a Fb post.
The industry is ripe for disruption. It is so far behind because of the corporate model of purchasing. If Facebook were hard to use, millions of people would make that decision, and Fb would be dead by now. When it is a corporation that buys a platform, the micro-signals from the user-driven market do not make it upstream. Neither the purchasers nor the vendors actually know enough about how end-users work, or what they like and hate about the platform.
Here is my actual list of bookmarked platforms
