Online Teaching Evaluations
The University is disappointed that the online teaching evaluation system failed on Monday night, April 20th. We sincerely apologize to all students who were unable to complete their evaluations. We understand this is a very frustrating situation. The ability to provide and receive course feedback is an important part of the classroom experience.
A team of experts is conducting a thorough investigation into the cause of the failure. To date, despite many testing cycles since Monday the 20th, no single factor has emerged as the obvious cause. We intend to have the program ready for use next term.
Just under 40% of all the possible responses had been received by the time the system failed at 9:00 PM on Monday night. These responses have not been lost. Reports on those evaluations will be provided by May 18th.
The online evaluation system has numerous benefits, including reduced manual clerical effort, faster results, and the ability to customize the questionnaire to provide more meaningful results. Despite this setback, the University continues to believe an online evaluation system is the best tool to collect and process course feedback.
Q1. Why were Teaching Questionnaires closed early?
An as-yet-unexplained system failure caused the early closure of Teaching Questionnaires (TQ).
On Wednesday, April 15, 2009, the campus-wide process for distributing and collecting Teaching Questionnaires (TQ) as a CTools service was opened for access.
For reasons not yet understood, beginning shortly after 9 pm on Monday, April 20th, CTools experienced a series of intermittent service interruptions. Initial indications suggested TQ as a source, and the CTools service was taken offline to restart the servers without TQ access. Continued operation of CTools was considered the highest priority, and CTools was restarted without TQ access at 11:40 pm on Monday.
As soon as other CTools operations were restored Monday night, the CTools technical teams began investigating possible causes for the outage. No single factor emerged, but several configuration settings were optimized and tested in the test environment. By Wednesday afternoon a decision had been made with input from university leadership to launch a graduated restart of TQ, with the provision that if any anomalies or outages resulted, TQ access would be turned off to preserve CTools core operations.
Shortly before the TQ tool was scheduled for restart, an unusual load peak of unconfirmed origin at 11:20 am on Thursday, April 23rd, caused all CTools servers to overload, and the system was automatically restarted. CTools was restored within 10 minutes, but that unexplained outage was the basis for placing the highest priority on maintaining CTools and not risking a relaunch of TQ. Based on the conditions agreed upon with university administration, TQ was not restored to service as had been planned.
Q2. How could this happen?
The causes leading to the closure of the TQ process are being investigated by CTools technical and operations staff from all three IT organizations that support it. This is a serious event, and problem investigation and resolution are of the highest priority. To date, despite many testing cycles since Monday the 20th, no single factor has emerged as the obvious cause.
The CTools development and operations teams have been at work since the 20th to identify the cause and will be contacting independent third-party software reviewers to assist us in performing an audit of the CTools software. This process may take several weeks. Upon completion, a detailed report will be made available, and a corrective plan, including more testing, will be developed.
Q3. What will become of this year's evaluation data?
As of 9 pm on Monday, April 20th, 52% of students had responded to one or more evaluations, comprising 39% of all possible evaluation responses. This data has been collected and is being processed for reporting to the academic units.
The data is of great value and is not corrupted in any way. This feedback will be included in the instructor reports after the term ends. Before the system failed, we collected approximately 62,000 or 65% of the expected responses (based on last term's participation rate of 62%). We received evaluations of 7,227 courses, and we have response rates of 40% or higher on 3,543 of those courses.
Q4. When will evaluation reports be sent to teachers and departments?
Teachers and departments should have their winter-term reports by May 18.
Q5. How will schools and colleges fairly evaluate instructors?
Online (web-based) evaluations are just one aspect of the teaching evaluation process. There are many other evaluation methods that, used in combination, can paint an accurate picture of teaching effectiveness. CRLT consultants would be glad to assist individual instructors and academic administrators in choosing teaching evaluation methods (contact for an individual appointment).
Q6. Why wasn't there a back-up plan to ensure students had the ability to complete TQs? As this is a new system, why didn't we run parallel systems and allow paper submissions as well?
It's important to note that until Monday night's failure, student submissions during previous terms had been completed without incident.
Literature shows significant differences between online and paper survey outcomes. This means the paper responses would not be comparable to the online responses. Consistency in delivery is critical to data integrity. It would not have been feasible to run both paper and online submissions.
Q7. What kind of data is collected?
Students answer the questions using Likert-scale multiple-choice responses and short-essay text responses.
Q8. Has the system failed like this before?
No problems were experienced with student submissions when we first used the student questionnaires during the Fall 2008 term. No issues were identified in previous pilots, either. There was an issue with a separate reporting function for faculty.
Q9. Is this a purchased product or was it developed internally?
The portion of the system that appears to be implicated in the failure is a component of the Sakai open source collaboration and learning environment, referred to as CTools at U-M. The "Teaching Questionnaires Tool" is a software module in this system that collects student feedback. This specific module was originally developed by a partner Sakai institution and was heavily customized to meet U-M requirements.
Q10. The University is known for its quick responses to issues such as moving graduation to the Diag. Why isn't the same level of redress being given to such a critical educational process?
This situation is being treated with the utmost priority, and the team has been working hard to resolve the issue. It's important to understand that before the system failed, we did receive evaluations of 6,967 courses, and we have response rates of 40% or higher on 3,442 of those courses, which means additional feedback opportunities may not be necessary for all courses. While the failure is very serious, we do believe there are alternatives to ensure students can still voice their opinions and faculty will be fairly reviewed.