Plugging in to Course Evaluation
Though college students can order textbooks, register for courses, view grades, and apply for jobs using the World Wide Web, a recent survey by Rensselaer Polytechnic Institute's Interactive and Distance Education Assessment (IDEA) Laboratory (Hmieleski, 2000) found that nearly all colleges conduct course evaluations at the end of the term, using a paper-based format. This form of student feedback does not serve students, faculty, or institutions well for the following reasons: results are delivered weeks after the term has ended; summaries are often ambiguous and fail to provide action-oriented solutions; students know that their comments will not be read for weeks (if at all); and evaluations serve as the basis for highly stressful decisions (e.g., raises, promotion, and tenure) rather than as a tool for improving teaching and learning. In the Internet age, this "autopsy approach," determining what went wrong after a course is over, leaves a vast untapped potential for improved teaching and learning via high-tech student feedback.
The Disadvantages of Traditional Course Evaluation
To prompt discussion of the challenges and benefits of transferring course evaluation to the Web, the IDEA Laboratory surveyed the nation's 200 most wired colleges (America's 100 Most Wired Colleges, 1999). Below is a summary based on the 105 responses to the March 2000 survey (Hmieleski, 2000).
Three Steps Toward Web-Based Course Evaluation
Of the colleges surveyed, those transitioning to the online environment have done so by converting their paper-based evaluation form to a Web-based form. This is an important first step but not an optimal use of the Web environment. Most of the surveyed schools are well-positioned to implement the second step in moving toward a Web-based evaluation. They can do this by incorporating a "feedback-and-refinement" process into their ongoing evaluation efforts. This process, particularly well-suited for high-enrollment and distance learning courses, allows frequent feedback from students. Such frequent student feedback removes obstacles to learning, improves student satisfaction, and rapidly improves course delivery. Regardless of the technology used to drive this process, the key features of feedback-and-refinement are providing instructors with the following:
Schools that lead in developing Web-based course evaluations can take a third step using currently available online technology and infrastructure. This stage redefines course evaluation as a process of frequent exchange of information between students and instructors to guide course refinement. Key features of this system include:
This system improves the quality of teaching, learning, and course delivery, increases the utility of course evaluations, and serves as a model for other institutions of higher education.
Advantages of Web-Based Evaluation
Many schools hesitate to convert to Web-based evaluation due to fears regarding cost, return rates, and response quality. Ironically, these same factors provide the strongest support for converting to a Web-based format.
Cost of Conversion. Although at the time of the survey, ten schools had conducted or were conducting cost analyses of Web-based evaluation, none reported the results of their analyses. Kronholm, Wisher, Curnow, and Poker (1999) conducted a comprehensive study on this issue, comparing production, distribution, monitoring, scanning, and analysis preparation costs for both paper-based and Web-based evaluations of a distance learning course. According to the study, delivering a 22-item paper-based evaluation to 327 participants across 18 sites costs $568.60 (or $1.74 per student, assuming labor costs are $15 per hour). The case study indicates that delivering the same evaluation via the Internet costs $18.75, a savings of 97%. This savings increases rapidly as course size increases, since via the Web, multiplying the number of evaluations adds practically zero cost.
In terms of data analysis and reporting, Kronholm et al. (1999) find that analyzing 327 forms takes approximately 16 hours of labor, with additional time needed to write reports to key stakeholders. Schools using feedback-and-refinement or fully Web-based evaluation systems, with automated analysis of a database of responses, complete the same analysis within a few seconds. Either system also allows customized reports to individual faculty members, requiring only a few hours of setup time, regardless of the number of reports.
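To make the "few seconds" claim concrete, here is a minimal sketch of automated analysis and per-instructor reporting. The database schema, instructor names, and sample rows are hypothetical stand-ins for whatever a school's own system would store, not a description of any particular product.

```python
import sqlite3
from collections import defaultdict

# Minimal sketch of automated evaluation analysis and per-instructor reporting.
# The schema and sample rows are hypothetical; a production system would read
# from its own database of student responses.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE responses (instructor TEXT, item TEXT, rating INTEGER)")
conn.executemany(
    "INSERT INTO responses VALUES (?, ?, ?)",
    [
        ("Prof. A", "Course pacing", 4),
        ("Prof. A", "Course pacing", 5),
        ("Prof. A", "Clarity of materials", 3),
        ("Prof. B", "Course pacing", 2),
    ],
)

# Aggregate mean ratings per instructor and item directly in SQL.
rows = conn.execute(
    "SELECT instructor, item, AVG(rating), COUNT(*) FROM responses "
    "GROUP BY instructor, item"
).fetchall()

# Emit one short report per instructor; additional reports add no setup cost.
reports = defaultdict(list)
for instructor, item, mean, count in rows:
    reports[instructor].append(f"  {item}: mean rating {mean:.2f} (n={count})")

for instructor, lines in sorted(reports.items()):
    print(f"Report for {instructor}")
    print("\n".join(lines))
```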
Return Rates. Several respondents noted that return rates of Web-based evaluations are lower when compared to in-class evaluations. However, if return rates become the primary goal of course evaluation, then the purpose of evaluation may be lost. One respondent summarized the thoughts of many: "We are afraid that students would not complete surveys [outside of class, but] with paper, the instructor can hold them captive at the beginning of the last class." End-of-course evaluations usually achieve the goal of high return rates (approximately 100% of students who show up for class that day). But this manner of evaluation results in many students simply circling the entire column labeled "agree" and leaving the comments section blank before rushing out the door. It is interesting to note that this same phenomenon occurs online as well. In our administrations of the feedback-and-refinement system, in which students give feedback voluntarily, return rates are indeed lower. When participation is mandatory, return rates also approach 100%. However, the number of useful comments drops dramatically.
Global comparisons between return rates of paper-based and Web-based formats have been conducted, but they are usually as unproductive as attempts to determine the superiority of distance or traditional learning. In both cases, there are far too many alternative explanations for the results (Champagne, Wisher, Pawluk, & Curnow, 1999; Phipps & Merisotis, 1999).
In reality, three primary factors determine return rate: faculty, students, and the instrument. If faculty are "on board" and eager to use the information provided by good evaluation, students see changes resulting from their feedback, and both parties recognize that the instrument measures what it is supposed to measure, then return rates will be high. If these factors do not exist (e.g., results are unknown for months, students believe that their comments will not be heard, evaluation items appear unrelated to the particular course), then return rates will be low.
Response Quality. Some respondents to the survey felt that students completing course evaluations on their own time, without the urgency of running to their next class, would provide richly detailed comments and thoughtful responses. Others speculated that students would give unduly negative or reckless remarks due to environmental distractions present outside of class. Still others argued that students would give insincerely positive remarks because their responses would not be anonymous.
We have found that when using a feedback-and-refinement system, in which students give feedback at their leisure, comments tend to be more plentiful and insightful. Our recent survey of a graduate management course found that students typed an average of four times as many words in their comments (62 words per student) as students completing a paper-based version of the same evaluation form at the end of class (15.4 words per student). In addition, comments delivered through the online system were automatically sorted by category and searchable by keyword, generating individual results and lists of action-oriented recommendations. Comments written on the paper-based form had to be re-typed to hide recognizable handwriting and provided no means of ordering the information for the instructor's benefit.
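As an illustration of the sorting and keyword search described above, the following sketch groups free-text comments under categories and filters them by a search term. The category keywords and sample comments are invented for the example and are not drawn from our system.

```python
# Minimal sketch of sorting free-text comments into categories and searching by keyword.
# The category keyword lists and the sample comments are hypothetical.
from collections import defaultdict

CATEGORY_KEYWORDS = {
    "pacing": ["fast", "slow", "pace", "rushed"],
    "materials": ["textbook", "slides", "readings"],
    "technology": ["website", "login", "video", "audio"],
}

def categorize(comments):
    """Group comments under each category whose keywords they mention."""
    grouped = defaultdict(list)
    for comment in comments:
        lowered = comment.lower()
        for category, keywords in CATEGORY_KEYWORDS.items():
            if any(word in lowered for word in keywords):
                grouped[category].append(comment)
    return grouped

def search(comments, term):
    """Return every comment containing the search term."""
    return [c for c in comments if term.lower() in c.lower()]

comments = [
    "The lectures feel rushed near the end of class.",
    "The audio on the streamed video cuts out often.",
    "Readings are posted too late in the week.",
]
print(categorize(comments))
print(search(comments, "video"))
```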
Conclusion
Colleges have grown accustomed to using end-of-term standardized evaluations as a basis for both improving the quality of instruction and making important faculty career decisions. This system frustrates both students and instructors, providing neither with the feedback required to make necessary changes while classes are in session. A feedback-and-refinement process serves students, faculty, and administrators better by removing obstacles to learning, providing a means to rapidly improve delivery, and cutting evaluation costs. A fully developed Web-based evaluation system serves colleges better by providing information more quickly and clearly and shifting the definition of quality instruction and improvement from "getting high scores" to "using student feedback to facilitate change." By taking these steps, schools can begin to mine the vast potential of technology-driven evaluation to improve teaching and learning.
Authors' note: Many of the statements we have made are based on feedback from college administrators, faculty, and students, as well as from our data and personal experience. We invite others to test our assumptions, share data, and make discoveries in the emerging area of Web-based evaluation.
References
America’s 100 Most Wired Colleges. (1999,
May). Yahoo Internet Life. Retrieved 15 June 2000 from the World Wide Web: http://www.zdnet.com/yil/content/college/colleges99.html
Champagne, M. V., Wisher, R. A., Pawluk, J. L., & Curnow, C. K. (1999). An
assessment of distance learning evaluations. Proceedings of the 15th Annual
Conference on Distance Teaching and Learning,
15, 85-90.
Hmieleski, K. H. (2000). Barriers to online evaluation. Troy, NY: Rensselaer Polytechnic Institute, Interactive and Distance Education Assessment (IDEA) Laboratory. Retrieved 15 June 2000 from the World Wide Web: http://idea.psych.rpi.edu/evaluation/report.htm
Kronholm, E. A., Wisher, R. A., Curnow, C. K., & Poker, F. (1999). The transformation of a distance learning training enterprise to an Internet base: From advertising to evaluation. Paper presented at the Northern Arizona University NAU/Web99 Conference, Flagstaff, AZ.
Phipps, R., & Merisotis, J. (1999). What’s the difference?: A review of
contemporary research on the effectiveness of distance learning in higher
education. Institute for Higher Education Policy. Retrieved 15 June 2000
from the World Wide Web: http://www.ihep.com/difference.pdf