Abstract
Obtaining sufficient survey responses to make course and instructor evaluation results meaningful is a challenge in many, if not most, health
professions training programs. This paper describes a series of policy changes that significantly improved data quality at one college of veterinary
medicine located in the United States. The steps consisted of minimizing the number of items appearing on the instruments, providing students
adequate time and space for completion, clearly explaining the purpose and value of the evaluations, simplifying data collection, collecting
verbal feedback, and closing the loop with student participants by informing them of any changes that were made as a result of their feedback.
The steps outlined in this model may be readily extended to other health professions programs that involve cohort models, multi‑instructor
courses, and limited time and personnel.
| Original language | English (US) |
|---|---|
| Pages (from-to) | 7-10 |
| Number of pages | 4 |
| Journal | Education in the Health Professions |
| Volume | 1 |
| Issue number | 1 |
| State | Published - Oct 1 2018 |