Miss the previous edition of this newsletter? Read it here.
Also, make sure you subscribe and share it with colleagues who can benefit from these insights!
Evolving from Surveys to Student Voice Listening Systems
Every fall, campuses gear up for the same cycle of course evaluations, end-of-term surveys, and quick student body temperature checks. All are familiar and, to a degree, useful and measurable.
But more and more academic leaders are hitting the limits of this approach. Student surveys alone tell only part of the story. When the only lens is high-level student sentiment, the data can feel incomplete, and trust in the process often wanes.
The result? Careful answers from students, accompanied by equally cautious internal conversations.
That is why many forward-thinking institutions are shifting toward holistic evaluation. Instead of treating surveys as the entire story, they are reframing them as one channel in a broader listening system. When all voices (students, peers, instructors) come together, you get a fuller, fairer view of effectiveness, creating a stronger foundation for growth.
This is not theory. Institutions such as Dalhousie University and Northwestern University, which were featured in an Explorance webinar on this topic, are already making the shift from single-source to multi-source learning.
Three big reasons are driving the shift toward holistic evaluations:
Equity and validity
We know student evaluations can be shaped by factors beyond an instructor’s control, including gender, race, discipline, and course level. A holistic model reduces biases by incorporating multiple perspectives (though it can’t erase bias entirely).
Trust and culture
Faculty buy in when their work is seen in context. Students engage when their feedback leads to visible change. A holistic approach gives both sides the space and latitude to participate in a meaningful way.
Accountability with nuance
Post-pandemic teaching looks different, to put it mildly. AI is transforming classrooms, while boards and accreditors want evidence that conveys meaningful information. Holistic evaluation meets those expectations by shaping a more nuanced, multi-faceted view of teaching.
The main takeaway: The more voices you can align on these central themes, the more likely leaders are to act confidently and plan targeted, lasting enhancements to campus life.
Even the best systems have room for improvement. It’s about enhancement, not replacement.
How It Works in Practice: Institutional Models Driving Holistic Evaluation
Across North America, the approach to holistic evaluation centers on three voices: students, peers, and self-evaluation.
Institutional policy sets the direction, with each faculty tailoring the process to its field. Teaching chemistry isn’t the same as teaching design, after all, and the evaluation measures shouldn’t be either.
Here are three examples from Explorance clients that stand out for how they blend structure, equity, and growth.
UCLA: Development before judgment
At UCLA, teaching evaluation is tied directly to faculty development. The Holistic Evaluation of Teaching initiative encourages mentoring and reflection, positioning evaluation as support rather than scoring. Departments piloted the model first, refining it before scaling the approach across campus.
UVA: Clarity through shared domains
The University of Virginia’s framework organizes teaching around four domains: course design, mentoring, reflection, and service. Each domain includes sample evidence like peer reviews and student input. Departments calibrate together to ensure consistency and fairness.
UAlberta: Equity in every dimension
At the University of Alberta, holistic evaluation is built around five dimensions that embed inclusion and leadership. The process blends artifacts, reflection, and student perception to give a fuller picture of teaching quality.
What's still in progress:
Time and workload. Collecting and reviewing multiple perspectives takes effort and coordination.
Training and calibration. Reviewers need shared standards for weighing different types of evidence.
Institutional alignment. Frameworks only work when tied to professional development and recognition.
Even so, these examples share one conclusion: policy creates structure, but culture drives trust. That’s where the real transformation begins.
A single source of truth sounds tidy, but, practically, it rarely is. Accurate, multi-source measurement is what truly strengthens data validity and reliability.
Student feedback captures the in-class experience. Peer observation captures pedagogy and fit. Self-reflection captures intent, design, and growth. In their individual silos, none tell a complete story. Together, they create a clearer, more balanced picture that leaders can act on and learn from.
That said, keep these important clarifications in mind:
Holistic doesn’t mean anything goes. It means evidence is gathered and weighed against shared, transparent criteria, such as course design, inclusion, assessment alignment, engagement, and continuous improvement.
It’s not about averaging scores. It’s about context. A low number paired with thoughtful peer notes and a reflective plan can frame results as a growth opportunity, not failure.
Systems work only when the people inside them are empowered. It’s not solely IT or technical admins who need to be involved in their construction. Faculty, deans, provosts, and other decision-makers who live the process daily must play a part in creating that synergy.
Michigan State University stands out as an institution that's redefining how teaching is evaluated by combining self-reflection, peer review, and student perception into a holistic model.
The message from presenter Nate J. Clason, Ph.D., an Academic Specialist at the Office of Faculty and Academic Staff Development, was clear: reform requires more than new systems. It requires sustained cultural evolution toward fairness and trust.
Watch the full recording of that MSU session on demand.
The Human Side of Evaluation: Culture Before Compliance
Holistic evaluation is only partly about collecting and analyzing better data. It extends to (and can change) how people feel about feedback.
Plenty of instructors have faced student comments that felt personal or unfair. Students have shared feedback that seems to vanish into a void. Peers have been asked to address crucial issues without clear guidance.
A healthier culture overcomes those common obstacles by:
Naming the “why.” The purpose is growth, plain and simple. Keep saying it. Model it from the top.
Normalizing reflection. Self-evaluation shouldn’t be seen as a defense memo. It’s a space to build trust by talking about goals, decisions, and constraints.
Building skills. Observation, equity awareness, evidence weighing—these are learned abilities. Short trainings, clear rubrics, and shared examples go a long way.
Every form of feedback carries at least some bias. The goal must be balance, not perfection. When feedback processes are transparent and focused on growth, resistance fades and improvement quickly follows.
6 Takeaways for Implementing Holistic Evaluation
If you’re moving from surveys to holistic evaluation, don’t skip any of these practical steps. The efforts will compound and give your listening system incredible momentum.
Pilot first. Pick one department or program, set a clear timeline, gather lessons, and examine your results. Once you have contextual data, then scale.
Define teaching effectiveness together. Draft criteria with faculty, students, and academic leadership. Include examples of acceptable evidence for each criterion.
Train the peer cohort. Cover observation focus, evidence standards, and equity topics like positionality and language. Provide checklists, sample notes, and calibrate with side-by-side reviews.
Resource the work. Recognize peer observation and dossier building as real service. Reduce other admin friction to pave the way for present and future success.
Automate the reporting, not the judgment. Use tools to consolidate student, peer, and self-reflection data into a single view. Include themes, trends, and comments, all infused with human-centric context.
Close the loop visibly. Share changes with students and faculty. Publish short updates, ideally framed as a “you said, we did” report. Trust grows when outcomes are easy to see.
Looking Ahead: Where Listening Systems Are Headed
As more institutions adopt multi-rater, multi-source approaches, take stock of these emerging patterns:
From annual to continuous. Short, formative touchpoints throughout the term feed a more meaningful end-of-course view.
From numeric to narrative. Quantitative signals are balanced with structured reflection and rubric-guided notes.
From compliance to coaching. Committees still make decisions. Departments are building light coaching programs and practice communities that shape feedback and growth.
Explorance is helping many institutions, like the University of Pittsburgh, bring these voices together into a single, coherent view.
The institution introduced pre-course surveys in Explorance Blue to capture student expectations before classes even begin. Faculty now use those insights to create a continuous feedback loop that supports stronger learning outcomes.
Next month, we’ll go deeper into mitigating bias in evaluation and structuring every part of the process to obtain more equitable feedback. Don’t miss it when it drops.
Feel free to forward this email to friends or colleagues who would benefit from the insights and content shared in this edition.