University of Colorado Boulder faculty course questionnaires
Scope and Contents
These statistical reports attach a letter grade to questions regarding an instructor’s performance in the classroom. The grades are derived from the average of the numerical scores given by students. The rated categories include: presentation of material, explanation of assignments, relevance of assignments, fairness of grading, treatment of students, explanation of complex material, accessibility of the instructor, how well students were motivated, and learning experiences. These categories are complemented by grades for an overall course rating and an overall instructor rating. The FCQs also compare each instructor’s score with the departmental average and the campus average. The campuses included are Boulder, Denver, Colorado Springs, and Continuing Education. Continuing Education became a separate category in the fall semester of 1983, although it is not consistently classified as a separate campus throughout the collection. In the early 1980s there was also a separate category for the “non-tenure track,” a practice abandoned after the spring semester of 1986. The spring semester of 1977 does not appear in the collection: no FCQs were administered that semester because the questionnaire was being reformatted. It is also important to note that the first three to four years of regular faculty course evaluations were printed either in book form or as loose-leaf computer readouts. After that point, the records are sorted by semester at the box level and by campus at the folder level, although organization within the folders varies over time. Early reports were divided by college and further by departmental number; a guide to these departmental numbers can be found inside the lid of each box for as long as the system was in use. In 1991 the processing of the records shifted, and the departmental numbers were changed.
While FCQs continued to be sorted by campus, college, and department, the departments were now arranged alphabetically rather than by departmental number. Within departments, courses are listed numerically. Each semester’s evaluations for a campus commonly include an index at the front of the first folder for that semester, although this is not always the case. The index should aid researchers in locating the course or instructor in question.
Dates
- Creation: 1974 - 2017
Biographical / Historical
The desire for a student evaluation of faculty and courses dates to the mid-1960s, although the need for it stemmed from earlier developments. Before 1945 and for some years after, such evaluations were impractical because the University operated under in loco parentis, acting in the place of parents. Since most of the student body were minors under the legal age of adulthood, then twenty-one, students had little recourse with regard to professors; the faculty acted, to all intents and purposes, as their guardians. There was also little need for such evaluations, given the University’s smaller enrollment and smaller faculty. Student clubs had faculty sponsors and faculty members, and this close contact with students eliminated the need for rating instructors. After 1945 the campus experienced a five- to six-year growth spurt as a result of the G.I. Bill. Although the growth eventually tapered off until the Baby Boomers arrived for college, the student population swelled from a mere 7,000 to 20,000 by 1970. The faculty had grown so large that the Faculty Senate, in existence since the nineteenth century, had become too unwieldy, and an elected Faculty Council was created in the late 1960s. This growth caused many students to lose the close contact with professors they had previously enjoyed. In 1970, after having been the subject of protests, the age of legal adulthood was reduced to eighteen and in loco parentis was dropped. As part of the expression of their new political power on campus, students began to request evaluations of professors, in part to regain the intimacy of the smaller school and the knowledge of faculty that such closeness implied.
Beyond that desire, students were concerned with creating a way to respond to their professors about the material and formats presented in the classroom, as they deemed their time at the University of Colorado a significant investment. The students’ philosophy of viewing their education as consumers began to generate greater parity between professors and themselves.
In fact, even before the official end of in loco parentis, students had set in motion a movement toward a faculty rating system. In September 1966 the ASUC published The Seer, a magazine listing student responses regarding their classes. Jeff Levine acted as director, John Bilorusky as president, and Twyla George as editor. Its expressed purpose was to give honest critiques where there was cause and praise where it was due. “The student must be honest, critical, and analytical about his educational experience,” stated Jeff Levine. Professors and courses were rated on a scale of one to five, with one denoting excellence and five marking deficiency. Certain questions were weighted because they were considered more pertinent to certain areas. Answers were also weighted by grade level: senior responses were weighted four times, junior three, sophomore two, and freshman one. Responses were then averaged for each professor in each course, and individual question means were also weighted. Questions were grouped into six categories, with a seventh score formed from the mean of all the weighted questions. The categories included classroom situation, examinations, lab or discussion sections, the professor, the course, and attendance. According to John Bilorusky, The Seer’s greatest difficulty was compiling the results, owing to student apathy toward filling out the forms. The magazine functioned on a fairly limited scope, and the faculty did not receive it particularly well. The students were not deterred, however, and took their cause to the Board of Regents for administrative assistance in generating a larger-scale program. Viewing themselves as consumers, they felt they had a right to know what classes and professors were like before committing their time and money, and with this monetary consideration in mind they pointed to the lack of such information available to them.
Some animosity between professors and students became apparent as the push for course evaluations gained strength. Some faculty members feared they could be destroyed by negative criticism; others held that the forms could be one-sided and subjective, as many thought The Seer was. Among faculty groups such as the Teaching Committee and the Faculty Council, there was little support for a mandatory or published teacher-course evaluation. As a result of this reluctance among the professors, students assumed they were being stonewalled. On October 24, 1973, the Tri-Executives of the Boulder Associated Students of the University of Colorado submitted a report to the Board of Regents stating, among other things, that the faculty committee had jeopardized the publication of course evaluations, that course evaluations required student participation to yield the needed data, and that secretive, unpublished course evaluations were a waste of time.
In actuality, one of the earliest forms of course evaluation began on faculty initiative between 1965 and 1967 with the creation of the University Testing Service. Its purpose was to benefit teachers through student feedback. This initial form had twenty-seven standard questions, with space allowed for sixteen additional questions chosen from a list of 160 optional ones, several short essay questions, and room for a few of the professor’s own questions. It was used on a voluntary basis except in the Business and Physics departments. However, faculty felt the questionnaire was not suited to specific course needs, and the system was eventually dropped. In 1969 President Thieme wrote a letter suggesting the establishment of the Teaching Committee, which would create a “Boulder-wide” teaching evaluation to be implemented by the fall of 1970. This system would “apply to all courses and all faculty…in a systematic manner.” Thieme did not, however, feel that the questionnaires should necessarily be mandatory, nor that they should be used for promotion or merit decisions; his interest in creating the evaluation program lay in helping young, inexperienced professors improve their teaching skills. The Teaching Committee was formed with students among its members, one of the first committees to include them. This administrative push toward evaluations was also not well received by the faculty, who feared the uses of the data derived from the questionnaires: not only was slander a concern, but in their minds there was a very direct connection between these evaluations and tenure. The administration, on the other hand, saw nothing wrong with “professors knowing the criticism, the analysis, the evaluation and suggestions of their students,” yet was not particular about whether the results of the forms should be made available to the students.
There was some apprehension about whether such published information would promote “good” teaching or merely more “popular” teaching.
In June 1975 an agreement was reached mandating the use of course evaluations throughout the Boulder campus. Regent Carrigan moved to adopt a “compulsory and universal system of Teacher-Course Evaluation” beginning with the fall semester of 1975. The details of the new program were to be worked out by a committee established by the resolution presented at the June 25, 1975 Regents’ meeting. The Committee on Teacher-Course Evaluation (C.T.-C.E.), which would be responsible for running the course evaluation program, comprised eleven members: six students, three faculty, one representative from the University Testing Service, and one from the Chancellor’s office. The UCSU recommended that departments happy with the form currently in use retain it, that others develop their own questionnaires, and that five to ten questions be universal to all surveys in order to give students a common standard.
By the mid-1980s, tuition had risen sharply, putting pressure on all universities to attract prospective students from a dwindling pool. The advertising that followed placed a premium on “teaching quality.” Concurrently, parents and legislators had been criticizing University faculty for a perceived overemphasis on research at the expense of teaching. As a partial response, the Board of Regents extended the Boulder campus’s FCQ program to the other three campuses in April 1986. The motion stated, “The evaluation system shall be designed to provide published information to students, faculty, departmental administration, and the University’s administration.” However, signs of problems were already arising. According to Professor Wolf, “…the student evaluation is rather contentious on the Boulder campus and faculty are not completely happy with it for several reasons – the anonymity of students, a feeling that styles of teaching can be punished, and the fact is that it is an immediate rather than long-term evaluation of the education process.” There was evidence that capable instructors as well as poor ones were receiving low ratings, a fault that some felt lay with “faculty and administrators who find it convenient to use an arbitrary score on a questionnaire as an evaluation of teaching.” Even the student government was aware that the current method of critiquing professors did not adequately serve the administration’s needs with respect to tenure review and promotion.
However, the “convenience” of statistical ratings became more evident over time, and there was enough cause for unease that in August 2000 the resolution was amended yet again. The revisions included a new objective designed to “improve instruction and student learning (formative evaluations).” In essence, this separated the previously established Teacher-Course Evaluations from what would now be termed “Faculty-Course Evaluations.” Student critiques would now merely supplement a more expansive type of review, the “formative evaluations.” The new formative evaluation was unique in that its results were to be confidential, its use optional, and numerical feedback not required as part of the evaluation methods. New means of achieving this goal were introduced, such as independent observation, video, educational specialists, faculty/peer visitation, and student/faculty focus groups. Teacher-Course Evaluations now fell under the category of summative evaluations, and their numerical results would continue to be made public and therefore available to students and others. Another significant change made by the Board of Regents on August 3, 2000 stated for the record that one of the uses of Faculty-Course Evaluations would be to “support the faculty evaluation process and faculty reward system (summative evaluation).” In other words, Teacher-Course Evaluations would officially become part of the awarding of promotion, tenure, and other merit-based rewards. The Board of Regents also listed other forms of teaching assessment: independent experts’ observations, faculty/peer visitations, supervisors’ observations, longitudinal assessment to measure year-to-year improvement, and the availability of remedial assistance to support improvements in teaching effectiveness.
The Board of Regents’ need to amend the evaluation policy repeatedly suggests that unintended applications of the evaluations had sprung up over time. The numerical ratings provided by anonymous students have had a significant effect on tenure proceedings, and great legitimacy has been granted to the statistical analyses in the documents. While the most recent amendment formally allows this usage, there also appears to be an effort to combat the negative effects of granting so much weight to the evaluations: by advising the employment of other forms of faculty critique, the “formative evaluations,” the policy attempts to create a more balanced system of faculty rewards. It must be noted, however, that the use of these “formative evaluations” is not yet mandated. Furthermore, while the use of Teacher-Course Evaluations in the faculty rewards system is nominally optional, usage of the completed forms at the Archives Department suggests it is not as optional as originally intended. For instance, Sociology, Fine Arts, and the Writing Program within the College of Arts and Sciences, as well as various departments within the College of Engineering, appear to use FCQ responses regularly in faculty personnel decisions, and the practice may be far more widespread. Another indication of widespread administrative use is the stockpiling of FCQs at the college, department, and individual faculty levels.
For the students’ part, the continued growth of the University community has limited their ability to make course selections based on FCQ results. Students are often locked into a particular section of a course, or a specific course itself, by schedule and degree-requirement constraints. Because students cannot always choose a particular professor, the use of the evaluations has shifted from consumer purposes to administrative ones. Today, course evaluations are used by campus administrators in decisions regarding the hiring, firing, promotion, and tenure of faculty, and faculty and graduate instructors also use their FCQs when applying for positions at other universities. Hence the Faculty Course Questionnaires, established as a consumer-relations effort to assist students in selecting their classes, developed into a method by which administrators could evaluate the “teaching” of their faculty and raise the standard of higher education.
Extent
112.5 linear feet (98 Boxes)
Language of Materials
English
Abstract
This collection contains the cumulative, numerical evaluations of faculty by students, or Faculty Course Questionnaires (FCQs), from 1975 to the present. The evaluations were conducted class by class for individual faculty members. The raw data is distributed to the evaluated faculty, and copies of the FCQs are mailed to the college, department, and faculty member; one set is sent to the Archives every semester. Evaluations are listed numerically by department and college. From 1975 to the fall of 1991, departments were listed numerically; following the redesignation of class numbers from three to four digits, departmental numerical designations were dropped and departments were listed alphabetically by college and department.
- Author
- Processed by: Megan Lillie, Sarah Johnson, Megan Applegate, November 2001; Christopher Leighton, 2002-2003; Genevieve Clark, 2006; Caroline Michaels, September 2007; Christine Cardenas, October 2008; Alex Jefferson, 2013; Ally Jewell, Alex Kaaua, 2016
- Date
- 2001
- Description rules
- Describing Archives: A Content Standard
- Language of description
- English
- Script of description
- Latin
Repository Details
Part of the University of Colorado Boulder Libraries, Rare and Distinctive Collections Repository