View KSU Summary Reports
Response Rates: Fall 2015 to Fall 2016
This report provides summary tables for 17,426 classes taught at Kennesaw State University from Fall 2015 through Fall 2016, showing the numbers and percentages of students who responded to invitations to provide student feedback on teaching via Digital Measures, an online vendor.
The overall response rate for all KSU classes was 33.2% (31.9% after excluding students who visited the site but declined to respond). Colleges’ response rates ranged from a high of 44.1% (42.5%) to a low of 26.1% (25.3%). (Note: The reported response rate in some classes may underestimate the actual response rate if one or more students stopped attending but did not officially withdraw through the Registrar’s Office before class rolls were sent to Digital Measures.)
Response rates declined as class size increased. Response rates of 50% or higher were achieved in 20.5% of classes with 1 student, 19.8% of classes with 2-10 students, 16.4% of classes with 11-20 students, 15.1% of classes with 21-50 students, and 6.2% of classes with 51-350 students. Response rates of 30% or lower occurred in 79.5% of classes with 1 student, 48.6% of classes with 2-10 students, 48.8% of classes with 11-20 students, 52.6% of classes with 21-50 students, and 66.0% of classes with 51-350 students.
For context, the response rates at KSU fall within the range of online response rates (20% to 47%, average of 33%) for eight studies summarized by Nulty (2008). However, KSU’s response rates are considerably lower than the online response rates reported by other institutions; Benton, Webster, Gross, and Pallett (2010, Table 3) reported response rates between 51% and 64% for institutions that administer the online version of the IDEA Student Ratings of Instruction System.
The low response rates for many classes at KSU may raise concerns about the accuracy of the data for summative assessments of teaching effectiveness. However, Benton and Ryalls (2016, p. 3) suggest that “even classes with low response rates can provide useful information for a teacher.”
Student Ratings on Instructor Effectiveness: Fall 2015 to Fall 2016
This report provides summary tables of data from students who completed online student feedback via Digital Measures in 15,783 classes taught at Kennesaw State University during Fall 2015, Spring 2016, Summer 2016, and Fall 2016 inclusive. The report focuses on one global item common to all feedback forms, “The instructor was effective in helping me learn.” Students rated their instructors on this item using a scale where 1=Strongly Disagree, 2=Disagree, 3=Agree, and 4=Strongly Agree.
Similar to summary reports of data from previous semesters, students rated most KSU instructors highly. The mean rating of all responses combined was 3.49, nearly halfway between a rating of Agree (3) and Strongly Agree (4). Students selected Strongly Agree as their rating 64% of the time, and almost 90% of all responses were either Agree or Strongly Agree.
Similar to previous summary reports, there is variability in the rating distributions across colleges, schools, and departments. Please note that these differences do not indicate that instructors in certain colleges, schools, or departments are more (or less) effective teachers. Benton and Cashin (2012) provide a useful summary of research on the factors that do, and do not, correlate with student ratings of instructors; one such factor is academic discipline: “humanities and arts courses receive higher ratings than social science courses, which in turn receive higher ratings than math and science courses” (p. 8). Benton and Ryalls (2016, p. 9) suggest that “differences in [student ratings by] disciplines [may be] attributable to variations in quality of teaching, students’ background preparation, or subject-matter difficulty.”
Faculty in a specific school or department may consider comparing their own response rates, mean ratings, and percent of Strongly Agree/Agree responses to their school or departmental averages as a rough estimate of how students perceive their teaching effectiveness compared to other faculty in their unit. However, I caution against using these data for more than a rough comparison because, as Benton and Cashin (2012, pp. 8-9) discuss, other factors also correlate with student ratings, such as course level, class size, and workload/difficulty of the course. Tom Pusateri (CETL Associate Director) intends to prepare additional analyses that will compare ratings by course level, course delivery (face-to-face, online, hybrid), and other factors (e.g., course size, courses that are part of learning communities, learning support courses, honors courses, courses in study abroad programs).
The mean rating for all KSU classes included in this analysis is 3.49, but this should not be taken to mean that half of KSU’s classes were taught by ineffective instructors. Instructors in 63% of all classes received ratings of either Agree or Strongly Agree from every student who responded to the instructor effectiveness item, and instructors in 82% of all classes received Agree or Strongly Agree responses from at least 80% of respondents. Instructors in only 6% of all classes received ratings of Agree or Strongly Agree from no more than half of the responding students. For more information, refer to this blog post on KSU's CETL website.
Reliability Analyses: Spring 2012 - Spring 2014
KSU began using Digital Measures Course Response in Fall 2010 to collect student ratings online for all courses. As of Spring 2014, KSU distributed a total of 762,350 electronic forms to students in 30,854 classes, saving paper and staff processing time compared to the previous paper forms.
In Spring 2012, the KSU Faculty Senate approved the inclusion of two multiple-choice items on all ratings forms, one item on course content and one on instructor effectiveness. Students rated each of these items on a 4-point scale (1=Strongly disagree, 2=Disagree, 3=Agree, 4=Strongly agree):
“Overall the content of this course contributed to my knowledge and intellectual skills.”
“The instructor was effective in helping me learn.”
This report summarizes analyses of the reliability of student feedback on these two items collected from Spring 2012 through Spring 2014. During these semesters, KSU distributed a total of 481,013 electronic forms to students in 19,072 classes. Students visited the Digital Measures Web site to provide responses to a total of 171,023 of these forms, for an overall response rate of 35.6%. Students who visited the Web site also had the option to decline completing a form. During these semesters, students actively declined to complete 6,427 forms; as a result, the total number of completed (answered + declined) forms is 177,450, for an overall completion rate of 36.9%.
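The response and completion rates above follow directly from the reported counts. As a quick arithmetic check (the variable names are ours, and the figures are those stated in the report), the rates can be reproduced in a few lines of Python:

```python
# Counts reported above for Spring 2012 through Spring 2014.
distributed = 481_013   # electronic forms distributed
answered = 171_023      # forms with responses
declined = 6_427        # forms actively declined

response_rate = answered / distributed
completion_rate = (answered + declined) / distributed

print(f"Response rate:   {response_rate:.1%}")    # 35.6%
print(f"Completion rate: {completion_rate:.1%}")  # 36.9%
```

Note that the completion rate counts a declined form as "completed," since the student visited the site and made an active choice.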
Summary of results in the report:
- Students have given high average ratings to both course content and instructor effectiveness.
Approximately 90% of students responded “Agree” or “Strongly Agree” to each of the items.
- Across classes, the average (mean) ratings of course content and instructor effectiveness have also been high.
Over 60% of classes received average ratings for course content and instructor effectiveness of 3.5 or higher on the 4-point scale (midway between 3=Agree and 4=Strongly Agree), and over 80% of classes received average ratings of 3.0 or higher (3=Agree).
- Students appear to be responding similarly to the course content and instructor effectiveness items.
The high correlation (0.81 overall, 0.86 for classes with 5 or more student ratings) between these items indicates that students tended to give similar ratings to both items. This suggests that students were not differentiating between the course content and the instructor who taught the course.
- The instructor effectiveness item is not behaving as well as we would expect.
Students provide more similar ratings for the same instructor in different sections of the same course taught via the same modality (face-to-face, hybrid, online), and they provide less similar ratings for different instructors teaching the same course. However, the correlation between mean ratings of the same instructor teaching the same course in the same semester (0.57) is lower than similar correlations reported in the research literature.
- The course content item is also not behaving as well as we would expect.
Ideally, students should respond to the content of the course rather than to the instructor, which should produce a higher correlation across different instructors teaching the same course than was obtained. Although some courses may be standardized, instructors may have enough flexibility that the same course differs substantially from one instructor to another, so students may be responding to the course content item based on the instructor’s approach in that class.
- The reliability of both the instructor effectiveness and course content items declines when comparing sections of courses with low (<=35%) and high (>=50%) response rates.
On the instructor effectiveness item, the correlation drops from 0.57 (when all response rates are included in the analysis) to 0.47 (when classes with low and high response rates are compared). Similarly, the correlation for the course content item drops from 0.48 to 0.39.
- Despite the lower correlations, the mean ratings in classes with low (<=35%) and high (>=50%) response rates were similar for both the course content and instructor effectiveness items.
Students in courses with low response rates were as likely as students in courses with high response rates to give higher (or lower) ratings on either item.
- The reliability of the instructor effectiveness item appears to be unaffected by the mode in which the course is delivered.
The correlations for the same instructor teaching the same course in the same semester using the same delivery mode are 0.57 for face-to-face classes, 0.61 for hybrid classes, and 0.58 for online classes.
- The correlation for the course content item differs for online classes when compared to the other delivery modes.
For online classes, the correlation in ratings of course content from students taking the same course in the same semester taught by different instructors is 0.39. This is higher than the correlations obtained in face-to-face (0.18) and hybrid (0.04) classes, and it is similar to the correlation for students taking different courses online from the same instructor (0.37). A possible explanation for these results is that instructors of online courses must first complete training in online course development, which may contribute to greater similarity in course content across courses taught by the same instructor and across course sections taught online by different instructors.
Summary of Fall 2010 - Fall 2012 Administrations
The following summary was presented at the January 14, 2013 Faculty Senate meeting by Tom Pusateri, CETL Associate Director.
SUMMARY OF COURSE RESPONSE DATA: FALL 2010 THROUGH FALL 2012
- KSU began using Digital Measures Course Response in Fall 2010. The handout provides
additional data tables summarizing the Fall 2010 through Fall 2012 administrations.
- From Fall 2010 through Fall 2012, we have distributed nearly 500,000 forms to students
in nearly 20,000 classes, saving paper and staff processing time.
- The average response rate is 38%, or 41% if we include students who visit the Digital
Measures site and actively decline to respond.
- There is some variability in response rates by College. Three Colleges have higher
response rates than the KSU average: Coles (44%), Bagwell (45%), and University College
(48%). One College (College of the Arts) has a lower response rate (31%) than the KSU average.
- Response rates are similar for classes of at least 3 and no more than 90 students
(between 37% and 43%). Response rates are lower for classes with 1 (30%), 2 (34%),
or more than 90 (30%) students.
- Response rates for 1-person classes vary by College: Arts (22%), Coles (38%), Bagwell (47%), WellStar (44%), Humanities & Social Sciences (26%), Science & Mathematics (33%), University College (42%).
RATING DISTRIBUTIONS FOR UNIVERSITY-WIDE ITEMS: SPRING 2012 THROUGH FALL 2012
In Spring 2012, the KSU Faculty Senate approved changes to the university-wide items on all ratings forms to include two multiple-response items, one on course quality and one on instructor quality:
- “Overall the content of this course contributed to my knowledge and intellectual skills.”
- “The instructor was effective in helping me learn.”
Students rated each item on a 4-point scale: Strongly agree, Agree, Disagree, Strongly disagree
Here is a summary of the results of analyses of the data. Download the handout for more information.
- Students have rated both their courses and their instructors very highly on average.
Approximately 90% of students responded Agree or Strongly agree to each of the items.
These ratings have been stable across semesters.
- Ratings of courses and instructors are slightly lower for lower-division (1000- and
2000-level) undergraduate courses than for upper-division undergraduate courses and for
graduate courses.
- Additional analyses were conducted for Spring and Summer courses that were offered
in two or more delivery methods (e.g., face-to-face, hybrid, online). There were
no consistent differences in response rates or rating distributions across delivery methods.
- There is variability in the rating distributions for departments and course prefixes. Department chairs have the ability to conduct further analyses of the data from courses within their programs that may provide further context for interpreting these results.
For a summary of research on the validity, reliability, and usefulness of student ratings of teaching, visit this link. This research paper also discusses differences in student ratings administered online versus on paper and differences in student ratings in face-to-face versus online courses.