Online Resources

Report from the Senate Center for Teaching Excellence Committee

Midcourse evaluation is a powerful tool that can be used to improve instruction and student learning. Midcourse evaluation strengthens communication between students and instructors to enhance teaching excellence at Miami University.

We, the Center for Teaching Excellence Senate Committee, submit this White Paper to offer a detailed proposal on midcourse evaluations at Miami University. We have concluded that midcourse evaluations are valuable processes that should be encouraged as long as instructors retain control of the process and its products.

Our committee members spent significant time discussing and debating midcourse feedback and its implementation. Considerable time was allocated to analyzing the literature on midcourse evaluations and soliciting input from colleagues who offered a wide range of perspectives.

As such, we have compiled a list of recommendations for midcourse evaluation processes at Miami University:

  • Self-selection: Instructors choose whether to participate.
  • Implementation: Instructors select the evaluation tool.
  • Results: Instructors maintain control of data.

Class structure, student composition, and teaching approaches should inform the delivery method and content of the midcourse evaluation. Attached to this document, we have provided examples of midcourse evaluations, key selection considerations, and suggestions for their implementation.

We encourage all instructors to consider the benefits of using midcourse evaluations to strengthen communication with their students and enhance their teaching effectiveness by practicing short-cycle course improvement.

Committee Members

Ellen Yezierski, Chair
Rose Marie Ward, Director of Center for Teaching Excellence
Janice Kinghorn, Senate Liaison
Faculty Members: Annie-Laurie Blair, Dennis Cheatham, Darrel Davis, Cynthia Govreau, Lynette Hudiburgh, Steven Keller, Janice Kinghorn
Graduate Member: Bethany MacMillan
Undergraduate Members: Brianna Minshall, Nicholas Spurgus

Purpose of Midcourse Evaluation

The midcourse evaluation process can be beneficial for both the instructor and the students. Students who have the opportunity to participate in midcourse evaluations, rather than just end-of-course evaluations, tend to have a more positive outlook on the course. Instructors are more likely to receive a higher end-of-course rating than if they had not used this teaching improvement tool (McGowan & Osguthorpe, 2011). Additional positive outcomes documented in the literature include an increase in instructor confidence and motivation, better communication with the students in the classroom, and a broader knowledge of teaching resources (Diamond, 2004).

Midcourse evaluations can be a successful and meaningful process if they follow a structured procedure (Hampton & Reiser, 2004). First, it is important that students understand why these evaluations matter to the instructor (Schwier, 1982). Explaining to students that they have a voice in their education, and how that voice helps the instructor improve, can inspire them to take the evaluations seriously (Kite, Subedi, & Bryant-Lees, 2015; Veeck, O'Reilly, & MacMillan, 2015). Part of this explanation should outline the characteristics of a good teacher (Harris & Stevens, 2013) or the statement of Good Teaching Practices from the Miami University Policy and Information Manual. Second, once the midcourse feedback is processed, the instructor can alter the course when possible (Friedlander, 1978). Third, it is important to eliminate bias in the evaluations by providing prompts that ask for specific suggestions for improvement focused on student learning, while minimizing responses that are not productive for course improvement (e.g., "she's nice"; Berk, 2006; Bartlett, 2005). Finally, the midcourse evaluations should include some form of student reflection about their effort in learning and in co-creating knowledge in the classroom (McKeachie & Svinicki, 2013).

Midcourse evaluations can be structured for every class and instructor schedule due to the large variety of resources available for implementation. For example, midcourse evaluations can take the form of a simple survey (either online or on paper), delivered in and/or out of class. Bullock (2003) advocates for online midcourse evaluations because they are easily analyzed, saving the instructor time. Beyond a structured form, midcourse evaluations can be a Small Group Instructional Diagnosis (SGID) or colleague review (Sipple & Lightner, 2013; Veeck, O'Reilly, & MacMillan, 2015). An instructor could also have the class name an ombudsman to collect anonymous class feedback and report this to the teacher (McCann, Johannessen, & Spangler, 2010). Furthermore, if the instructor prefers putting a larger amount of this process in the hands of the students, they could implement an Instructional Development and Effectiveness Assessment. In this process, students rate themselves and their own learning development against the learning outcomes for the class (McKeachie & Svinicki, 2013).

Midcourse evaluations can be intimidating for instructors because of the potential for "bad" evaluations and their possible influence on a career (Pulich, 1984). Removing the intimidating features of these evaluations is critical if large numbers of instructors are to adopt them. Instructors need feedback to improve instruction, and instructors at all levels and ranks can benefit from midcourse evaluations.

There are five main advantages of midcourse feedback: (1) in contrast to end-of-course evaluations, the feedback can be used to make changes during the current course; (2) the students report feeling empowered to help craft their educational experience; (3) questions can be tailored to highlight the specific characteristics of the course, rather than global measures of teaching effectiveness from the end-of-course evaluation; (4) the instructor can solicit formative feedback that does not have to be shared with the university administration; and (5) the midterm feedback can go directly to the instructor about characteristics of the course most pertinent to the instructor (Keutzer, 1993).

Cohen's meta-analysis of studies about the impact of midcourse evaluations on end-of-term evaluations concludes, "Instructors receiving mid-semester feedback averaged 0.16 of a rating point higher on end-of-semester overall ratings than did instructors receiving no mid-semester feedback" (Cohen, 1980, p. 337). In a more recent study, the impact of midcourse feedback on end-of-term feedback depends on what instructors do with the midcourse feedback. For example, "Student ratings showed improvement in proportion to the extent to which the instructor engaged with the midcourse evaluation. Faculty who read the student feedback and did not discuss it with their students saw a 2 percent improvement in their online student rating scores. Faculty who read the feedback, discussed it with students, and did not make changes saw a 5 percent improvement. Finally, instructors who conducted the midcourse evaluation, read the feedback, discussed it with their students, and made changes saw a 9 percent improvement" (McGowan & Osguthorpe, 2011, p. 169).

Considerations for Instructors

Course Content Focus

  • What aspects of the course do you want feedback on?
  • Will the midcourse feedback be on the whole course or part of the course (e.g., the syllabus, an assignment, an exam)?

Format of Evaluation

  • Will the midcourse feedback be student feedback or colleague feedback?
  • How does the size of your class impact the midcourse feedback process?
  • What types of questions do you want to use (open-ended vs. closed-ended questions, or a mix)?
  • Do you want to use the same questions that are provided on the end-of-course evaluations?
  • Should the responses be anonymous or do you want to follow up with specific students based on responses?

Time Investment

  • How much time do you want to devote before the course to format the evaluation?
  • How much class time do you have to deliver the midterm evaluation?
  • How much time outside of the class do you want to devote to processing the evaluation?

Delivery of Evaluation

  • Do you want the midcourse evaluation delivered online, on paper, verbally, or through another method?
  • Should the students see one another's responses on the midcourse evaluation?
  • Should the students work in groups to produce the midcourse feedback?
  • Is the midcourse feedback process accessible and inclusive to all learners in your course?

Results and Reporting

  • Who do you want to have access to the results of the evaluation? (e.g., chair, administration)
  • In what format will you communicate the results (e.g., annual report or dossier) if you are sharing them?
  • What is your end goal or purpose in collecting these feedback results? (e.g., tenure, class improvement, teaching development)

Impact of Results and Validity

  • Do you want a person other than yourself to deliver the midcourse evaluation?
  • If someone else, what are the qualifications of the people conducting the midcourse evaluation?
  • How much of the course are you willing to change?
  • What are your plans for sharing the results with the students?


Below are key points to consider in administering midcourse evaluations to ensure credibility, integrity, and instructional improvement, according to best practices gleaned from the literature:

  1. Midcourse evaluations are best administered near the midpoint of the term (e.g., the seventh week of a full-semester class) but can be distributed multiple times throughout the semester for more frequent feedback. It is helpful to match the timing of the evaluation with the purpose for giving the evaluation.
  2. Each evaluation process has advantages and shortcomings. The selection of a method should match the instructors' teaching approach, classroom, and who they would prefer have access to the data and summary.
  3. It is recommended that instructors not administer their own evaluations; however, there are tools available if an instructor prefers to do so.
  4. Evaluation results should be discussed with the class, and improvements to the course made where warranted. The principal purpose of the method is short-cycle continuous improvement.
  5. Independent studies, research, field experience courses and courses with low enrollment can also have midcourse feedback processes that are sensitive to the enrollment and experiential nature of the course.
  6. In team-taught courses, the same method of evaluation should be administered uniformly in each subsection of a course as taught by secondary instructors, to ensure uniform effectiveness in course delivery.

Using Your Midcourse Evaluation Data

A midcourse evaluation is good in and of itself: it gives you feedback and reminds the students that you are interested in what and how they are learning. However, you will also want to report back to your students on the evaluation itself. Doing so lets students know that you have considered what they have said; it helps students see that not everyone in the course may feel the same way they do; and it reinforces for students that filling out evaluation forms thoughtfully is appreciated and valued. Here are some tips on responding to students' feedback.

Respond quickly to students' feedback. Ideally, you will want to respond to your students' comments as soon as feasible. So schedule midcourse evaluations at a time during the term when you will have the opportunity to immediately review the class's comments and respond to them. It is common to discuss the results and changes the following class session.

Consider carefully what students say. First, look over the positive things your students have said about the course. This is important because it is easy to be swayed by negative comments.
Then, read the suggestions for improvement and group them into three categories:

  • Those you can change this semester (for example, the turnaround time on homework assignments)
  • Those that must wait until the next time the course is offered (for example, the textbook)
  • Those that you either cannot or, for pedagogical reasons, will not change (for example, the number of quizzes or tests, or the fact that you teach the full hour the course is slotted for)

You may want to ask a colleague or a faculty developer from the Center for Teaching Excellence to help you identify options for making changes. Be open minded about creating space for change, but remember that you still have the final say on what remains the same and what is changed.

Let students know what, if anything, will change as a result of their feedback. Thank your students for their comments and invite their ongoing participation in helping you improve the course. Students appreciate knowing that an instructor has carefully considered what they have said. Clarify any confusions or misunderstandings about your goals and their expectations. Then give a brief account of which of their suggestions you will act upon this term, which must wait until the course is next offered, and which you will not act upon and why. Let students know what they can do as well. For example, if students report that they are often confused, invite them to ask questions more often. Keep your tone and attitude neutral; avoid being defensive, indignant, or unduly apologetic.

Select a method for responding to student feedback that works for you. Most instructors simply discuss the results with the class as a whole during a scheduled class session. Some instructors provide a handout of salient responses to questions, deleting those that are clearly idiosyncratic (e.g., a single comment that says "this classroom is too hot"). Other instructors give a short PowerPoint presentation, complete with graphs and charts of responses. Still others post summary responses on the whiteboard so students can see what others have written. Whichever method you select, the most important factor is to respond thoughtfully and in a timely fashion.

(Adapted by Barbara Davis and Steve Tollefson from Tools for Teaching, Jossey-Bass, 2001.) Source: UC Berkeley Office of Educational Development
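The handout approach described above — tallying salient responses and dropping clearly idiosyncratic one-off comments — can be sketched in a few lines of Python. This is a minimal, hypothetical helper (the function name and the exact-match tallying are illustrative assumptions; real free-text comments would need looser matching or manual theming):

```python
from collections import Counter

def summarize_feedback(comments, min_count=2):
    """Tally normalized free-text comments and keep only themes
    mentioned by at least `min_count` students, filtering out
    one-off idiosyncratic remarks."""
    tally = Counter(c.strip().lower() for c in comments)
    return {theme: n for theme, n in tally.items() if n >= min_count}

comments = [
    "More worked examples",
    "more worked examples",
    "Post slides before class",
    "post slides before class",
    "This classroom is too hot",  # idiosyncratic: appears only once
]
print(summarize_feedback(comments))
# → {'more worked examples': 2, 'post slides before class': 2}
```

In practice, an instructor would group paraphrases of the same suggestion by hand before tallying; the threshold simply makes the "idiosyncratic" judgment explicit.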

Midcourse Evaluation Tools

Below are some midterm course evaluation tools you can use. Each tool has a description and a delivery method. Most also include detailed examples of how to administer them.

Bare Bones Questioning Technique (aka Stop-Start-Continue; Snooks et al., 2004)

Who has access to the data: Instructor

Delivery method: Online, in class (e.g., note cards), or verbally


This is a five-minute evaluation process in which students write their answers to the questions below on note cards or online, or give them verbally. Instructors may write the questions on the board/screen, read them aloud, or distribute them on paper. At the end of the activity, the instructor has access to the raw data. The words in parentheses next to the questions below should help the instructor analyze what should no longer happen, begin to happen, or continue to happen in the classroom.

Use the following three questions:

What (if anything) is interfering with your learning? (STOP);
What suggestions do you have to improve your learning? (START);
What is your instructor doing that helps you to learn? (CONTINUE).
To download the Bare Bones instrument to your course Canvas site, search "Miami Midcourse Evaluation CTE & eLearning" in Canvas Commons.

Student Assessment of Their Learning Gains (SALG)

Who has access to the data: Instructor

Delivery method: Online


The SALG is a free online survey for testing curricula and pedagogy, developed through a National Science Foundation grant and currently hosted by the Wisconsin Center for Educational Research. The SALG is designed for instructors in all disciplines to get feedback from their students on various elements of a course. Once you register, you can modify the survey to fit your course. The tool is preset to create learning goals based on the statements that you choose. Once setup is complete, students take the survey online through a link you provide; the Wisconsin Center then provides a statistical report.

Register at the SALG Website for Instructors or use the template with your own survey platform (Canvas, Qualtrics, Google Forms, SurveyMonkey, Formstack, etc.). The Center for Teaching Excellence has archived a seminar on how to use the SALG: CTE SALG Video

University Online Midcourse Evaluation

Who has access to the data: Instructor, Chair, University

Delivery Method: Online


The midcourse evaluation will be delivered via the same delivery tool (i.e., What Do You Think) as end-of-course evaluations. Please contact courseevals@miamioh.edu for details on how to use the survey feature of "What Do You Think" as a tool for creating and delivering midcourse evaluations.

Students' Evaluation of Educational Quality (SEEQ)

Who has access to the data: Instructor

Delivery method: Online


The SEEQ is a comprehensive student rating form providing useful information about teaching effectiveness. This tool focuses on learning environment, enthusiasm, organization, group interaction, individual rapport, breadth, examination, assignments, and overall (general) evaluation. This tool can be administered online or printed and distributed during a class session.


1. The following statements are rated on the scale: Very Poor, Poor, Moderate, Good, Very Good, or Not Applicable

You find the course intellectually challenging and stimulating.
You have learned something which you consider valuable.
Your interest in the subject has increased as a consequence of the course.
You have learned and understood the subject materials in this course.
2. Do you have any comments to add about the LEARNING ENVIRONMENT of the course?

Quick Course Diagnosis (QCD)

Who has access to the data: Instructor

Delivery method: In class


For a QCD, the instructor meets with a faculty developer to discuss objectives and any changes to the basic protocol. The instructor prepares the class for the 15-minute experience and leaves the room during the QCD. Later, the instructor meets with the faculty developer to review the data and to plan improvements.

For the processing, the faculty development team (one to ten people, depending on the class size) greets the students and explains the procedures. Students are asked to write on an index card a number from one to five indicating their satisfaction level with the course and a word or phrase to clarify their experience ("awesome," "confusing," etc.). For the report, these data are dropped into a histogram that displays the number of students and lists each number and the associated words or phrases.
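The histogram-and-phrases report described above can be sketched as a short script. This is an illustrative sketch, not part of the QCD protocol itself; the function name and card representation are assumptions:

```python
from collections import Counter, defaultdict

def qcd_histogram(cards):
    """Build the QCD report: for each satisfaction rating (1-5),
    count the students and collect the words/phrases they wrote."""
    counts = Counter(rating for rating, _ in cards)
    phrases = defaultdict(list)
    for rating, phrase in cards:
        phrases[rating].append(phrase)
    return {r: (counts[r], phrases[r]) for r in sorted(counts)}

# Each index card: (satisfaction rating 1-5, clarifying word or phrase)
cards = [(5, "awesome"), (3, "confusing"), (5, "engaging"), (2, "too fast")]
for rating, (n, words) in qcd_histogram(cards).items():
    print(f"{rating}: {'#' * n}  {', '.join(words)}")
```

The faculty developer would normally produce this chart by hand or in a spreadsheet; the point is simply that the report pairs each rating count with the students' own words.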

The team can then display for students, via a projector or printed copies, a numbered list of the student learning outcomes (SLOs) for the course. On the reverse side of the index card, the students indicate (by recording their numbers) the two SLOs they felt were best met and the two that were least fulfilled. During the final stage of the QCD, students form groups of five to seven and on a highly structured form, they identify the course (or program) strengths and weaknesses using a cooperative brainstorming technique called "roundtable" where students rapidly pass around a sheet of paper, adding ideas as they say them aloud. The groups then rank the top three strengths and the top three weaknesses. These data are recorded onto a single template, group by group, and then analyzed by a person skilled in trend analysis, usually the faculty developer. Common themes are coded with the same color across teams, thus emphasizing the common strengths or issues. For example, if four teams mention "poor textbook" or anything similar (e.g., "textbook sucks"), a reader will see red in all the team ratings, if that is the color selected for "poor textbook."

Instructional Skills Questionnaire (ISQ)

Who has access to the data: Instructor

Delivery method: Online


The ISQ can be used to provide instructors with immediate and specific feedback concerning their teaching. It conceptualizes teaching in terms of seven dimensions based on Feldman's (2007) categories of teaching behavior. Each dimension is measured by two indicative items and two contra-indicative items with a 7-point Likert-scale response format (response options ranging from strongly disagree to strongly agree). The contra-indicative items are recoded prior to analysis.

The seven ISQ dimensions are defined as follows:

Structure: the extent to which the subject matter is handled systematically and in an orderly way
Explication: the extent to which the instructor explains the subject matter, especially the more complex topics
Stimulation: the extent to which the instructor interests students in the subject matter
Validation: the extent to which the instructor stresses the benefits and the relevance of the subject matter for educational goals or future occupation
Instruction: the extent to which the instructor provides instructions about how to study the subject matter
Comprehension: the extent to which the instructor creates opportunities for questions and remarks regarding the subject matter
Activation: the extent to which the instructor encourages students to actively think about the subject matter
These statements are rated on a scale of Strongly Disagree, Disagree, Somewhat Disagree, Neither Agree nor Disagree, Somewhat Agree, Agree, and Strongly Agree.
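The recoding of contra-indicative items mentioned above follows the standard reverse-scoring rule for a 7-point scale (a response of 1 becomes 7, 7 becomes 1). The sketch below assumes that rule and a simple per-dimension average; the ISQ's exact scoring procedure may differ, and the function names are illustrative:

```python
def recode_contra_indicative(response, scale_max=7):
    """Reverse-score a contra-indicative Likert item so that a higher
    score always means more of the dimension: on a 1-7 scale, 1 -> 7, 7 -> 1."""
    return scale_max + 1 - response

def dimension_score(indicative, contra_indicative, scale_max=7):
    """Average the two indicative items with the two recoded
    contra-indicative items to get one dimension score (an assumption;
    the published ISQ scoring may aggregate differently)."""
    recoded = [recode_contra_indicative(r, scale_max) for r in contra_indicative]
    items = list(indicative) + recoded
    return sum(items) / len(items)

# e.g. Structure: two indicative responses, two contra-indicative ones
print(dimension_score([6, 7], [2, 1]))  # contra items recode to 6 and 7 → 6.5
```

Recoding before analysis ensures that agreement with a negatively worded item ("the lecture was disorganized") does not cancel out agreement with its positively worded counterpart.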

For more clarity on this evaluation process, please visit this PLOS Article Abstract

Knol MH, Dolan CV, Mellenbergh GJ, van der Maas HLJ (2016) Measuring the Quality of University Lectures: Development and Validation of the Instructional Skills Questionnaire (ISQ). PLoS ONE 11(2): e0149163. doi:10.1371/journal.pone.0149163

CWSEI Teaching Practices Inventory

Who has access to the data: Instructor

Delivery method: Instructor reflection


The instructor completes the Teaching Practices Inventory (TPI) and writes a reflection; students are not directly involved in this process. The questions are specifically geared toward lecture-based science and mathematics courses, and the inventory takes about 10-15 minutes to complete. The acronym CWSEI stands for the Carl Wieman Science Education Initiative at the University of British Columbia.

Classroom Observation Protocol for Undergraduate Students (COPUS)

Who has access to the data: Instructor

Delivery method: In Person


Instructors observe their classroom in multiple 2-minute increments throughout the session. These observations focus largely on student behavior rather than teacher quality. There are 25 different codes that can be used to describe each observation increment, which can then be turned into quantitative data for easier assessment. These codes represent themes such as listening, individual thinking, clicker, discussion, worksheet group work, other group work, answer, student, whole-class discussion, predicting, student present, test/quiz, waiting, other, lecturing, writing, follow up, pose, moving/guiding, one-on-one, demo+, and admin. A 1.5-hour training session is recommended to learn how to implement this protocol properly.

Reformed Teaching Observation Protocol (RTOP)

Who has access to the data: Instructor

Delivery method: Paper


This observation method is advocated for use in classrooms that are considered "reformed" in the sense of being student-centered and activity-based rather than teacher-centered. The RTOP looks at five main items in a reformed classroom: lesson design and implementation, propositional knowledge, procedural knowledge, student-student interaction, and student-teacher interaction. Each of these items, or subscales, is graded on a scale of 0-4. This procedure was originally created for mathematics- and science-based courses.

Example Section:

Instructional strategies and activities respected students' prior knowledge and the preconceptions inherent therein.
The lesson was designed to engage students as members of a learning community.
In this lesson, student exploration preceded formal presentation.
This lesson encouraged students to seek and value alternative modes of investigation or of problem solving.
The focus and direction of the lesson was often determined by ideas originating with students.
Items are rated using the following scale: 0 - Never Occurred, 1, 2, 3, 4 - Very Descriptive.

Teaching Dimensions Observation Protocol (TDOP)

Who has access to the data: Instructor and Observers

Delivery method: Paper


This method is used to examine the dynamics that occur between students, instructors, and technologies within the classroom. It describes teaching rather than judging teaching quality. The TDOP focuses on six main areas: instructional practices, student-teacher dialogue, instructional technology, potential student cognitive engagement,* pedagogical strategies,* and students' time on task.* The three areas marked with an asterisk (*) are optional for those using this evaluation tool. To use this method, select who will observe the class (usually more than one person), select which of the six areas you want them to focus on, use the codes on the template (a sample is given below), participate in a training on reading and using results, conduct the observations, and then analyze and interpret the data.

Peer Review of Teaching (aka Colleague Evaluation)

Who has access to the data: Instructor

Delivery method: Classroom observation or critique of classroom artifacts


Peer review is often identified with peer observations, but it is more broadly a method of assessing any aspect of the class for the instructor under review. This typically includes peer observations of teaching along with other evidence, such as syllabi, assignments, student work, and exams. Your peer may use their own background knowledge of teaching to evaluate these items and events, or it may be beneficial to use the benchmarks provided by a professional organization in your field. If you are interested in learning more or would like help locating these professional benchmarks, please contact your department chair.

It is also worth noting a common distinction between two very different forms of peer review: formative and summative. Formative evaluation typically is oriented solely toward the improvement of teaching and is part of instructional mentorship and development. Summative evaluation, in contrast, is done to inform personnel decisions. To preserve the freedom and exploration of individual instructors, formative reviews may be shielded from scrutiny for a period of years until there needs to be accountability to standards of excellence for personnel decisions. At this point in time, summative evaluations are more common since they are tied to decisions related to reappointment, promotion, or tenure (Bernstein et al., 2000). Because the more consequential nature of summative evaluations tends to diminish the formative value of the peer review process, it is important to maintain a clear distinction between these types of evaluation and be transparent with those under review.

Visit the Peer Review of Teaching page on the Provost's website.

Feedback Question Bank

There are several types of questions that can be used in midcourse evaluations. Below are samples of narrative, Likert-scale, qualitative, and quantitative questions you can integrate into your evaluation process.

Narrative Questions that Generate Qualitative Data

  1. What have you learned in this course that you find particularly interesting or compelling?
  2. What's helpful in this course to your learning?
  3. What suggestions do you have for change?
  4. How is the course going for you?
  5. What would help make it a better learning experience for you?
  6. On a scale of 1-7, with 1 being low and 7 being high, how is the course going for you? Why did you choose this number?
  7. Do you usually understand what is expected of you in preparing for and participating in this class? If not, please explain why not.
  8. What aspects of this course and your instructor's teaching help you learn best?
  9. What specific advice would you give to help your instructor improve your learning in this course?
  10. What other ideas would you suggest to improve this course (e.g., changes in course structure, assignments, or exams)?
  11. What are the most important things you have learned so far in this class?
  12. What don't you think you understand well enough yet?
  13. What would you like to see more of between now and the end of the semester?
  14. What do you think we could cut down on?
  15. What do you need to do in terms of understanding the material between now and the end of the semester?
  16. What is your overall evaluation of the instructor?
  17. Which aspects of this course/instructor contribute to valuable learning experiences?
  18. What aspects of the course/instructor need to be improved to increase the value of the learning experience?
  19. Please write any additional comments or suggestions.

Likert-Type Scale Questions that Generate Quantitative Data

Students rate each statement on a five-point scale: Strongly Disagree, Disagree, Neither Agree nor Disagree, Agree, Strongly Agree.

  1. I find the format of this class (lecture, discussion, problem-solving) helpful to the way that I learn.
  2. I feel that this class format engages my interest.
  3. I feel comfortable speaking in this class.
  4. I learn better when the instructor summarizes key ideas from a class session.
  5. I find the comments on exams or other written work helpful to my understanding of the class content.
  6. I find that this class stimulates my interest in reading about this subject outside of class.
  7. I feel comfortable approaching the instructor with questions or comments.
  8. I think that I would learn better if a different format were used for this class (suggested below).
  9. The instructor holds the students to high academic standards.
  10. The instructor effectively challenges me to think and to learn.
  11. The instructor is well prepared.
  12. Examinations and/or other graded components cover course concepts in a challenging manner.
  13. The instructor shows enthusiasm for the subject.
  14. I feel free to ask questions and to make comments in class.
  15. The instructor deals with questions and comments effectively.
  16. The instructor is generally available during office hours.

Questions for Problem-Solving or Laboratory Classes:

    1. The problems worked in this class help me in working other problems on my own.
    2. The problems worked in this class help me in learning the content ideas in this class.
    3. I feel that I learn how to solve problems more easily when I work with a group of students.
    4. I find the laboratory lectures helpful in understanding the purpose of the experiment.
    5. I find the instructor's comments during laboratory help my understanding of key steps in the experiment.
    6. I find the comments on my written laboratory reports helpful in understanding the experiment.
    7. I learn more from the laboratory when I am given questions about it to think about first.
    8. I learn more from the laboratory when I am given questions about it to write about first.

Questions for Discussion-Oriented Classes:

    1. I find class discussions help me in understanding the readings.
    2. I find class discussions help me in understanding key ideas in the course.
    3. I learn more if class discussions are more structured.
    4. I feel that class discussions are dominated by one or a few people.
    5. I learn better when I have more of a chance to speak.
    6. I learn more from discussions when I am given a question to think about first.
    7. I learn more from discussions when I am given a question to write about first.

Questions for Classes Using Team or Group Work:

    1. I feel that I learn more when I work with a group.
    2. My group works well together.
    3. I feel that I need more guidance for our group work.
    4. I find that working in a group confuses me.
    5. I find it helpful if the instructor summarizes results obtained as part of group work.
    6. I find it helpful to get feedback from my group on my own performance in the group.
    7. I think that groups work better when each person has an assigned role in the group.

Quantitative and Qualitative Questions on Student Preparedness, Effort, and Work

  1. I take responsibility for helping make this course a positive learning environment.
  2. I attend class.
  3. I prepare for class.
  4. When I attend class, I am actively engaged.
  5. I stay up-to-date in the course work.
  6. I seek help when I need it.
  7. What steps could you take to improve your own learning in this course?
  8. How many hours per week, outside of regularly scheduled class meetings, do you spend on this class?
    • 1-2, 2-4, 4-6, 6-8, more than 8
  9. How much of the reading that has been assigned so far have you completed?
    • 100%, 90%, 75%, 50%, less than 50%

Suggested Readings
The following bibliography lists suggested readings on midcourse evaluations. Use it to learn more about the evaluation process and how to make the most of it in your class.

Bartlett, A. (2005). "She seems nice": Teaching evaluations and gender trouble. Feminist Teacher, 15(3), 195-202.

Berk, R. A. (2006). Thirteen strategies to measure college teaching: A consumer's guide to rating scale construction, assessment, and decision making for faculty, administrators, and clinicians. Sterling, VA: Stylus.

Bullock, C. D. (2003). Online collection of midterm student feedback. New Directions for Teaching and Learning, 95-102.

Buskist, C., & Hogan, J. (2010). She needs a haircut and a new pair of shoes: Handling those pesky course evaluations. Journal of Effective Teaching, 10(1), 51-56.

Cohen, P. A. (1980). Effectiveness of student-rating feedback for improving college instruction: A meta-analysis of findings. Research in Higher Education, 13(4), 321-341.

Diamond, M. R. (2004). The usefulness of structured mid-term feedback as a catalyst for change in higher education classes. Active Learning in Higher Education, 5(3), 217-231.

Friedlander, J. (1978). Student perceptions on the effectiveness of midterm feedback to modify college instruction. The Journal of Educational Research, 71(3), 140-143.

Hampton, S. E., & Reiser, R. A. (2004). Effects of a theory-based feedback and consultation process on instruction and learning in college classrooms. Research in Higher Education, 45(5), 497-527.

Harris, G. L. A., & Stevens, D. D. (2013). The value of midterm student feedback in cross-disciplinary graduate programs. Journal of Public Affairs Education, 19(3), 537-558.

Keutzer, C. (1993). Midterm evaluation of teaching provides helpful feedback to instructors. Teaching of Psychology 20(4), 238-240.

Kite, M. E., Subedi, P. C., & Bryant-Lees, K. B. (2015). Students' perceptions of the teaching evaluation process. Teaching of Psychology, 42(4), 307-314.

Ladson-Billings, G. (1996). Silences as weapons: Challenges of a Black professor teaching White students. Theory into Practice, 35(2), 79-85.

Lewis, K. (2001). Using midsemester student feedback and responding to it. New Directions for Teaching and Learning, 87, 33-44.

Marsh, H. W. (1982). SEEQ: A reliable, valid, and useful instrument for collecting students' evaluations of university teaching. British Journal of Educational Psychology, 52(1), 77-95.

McCann, T. M., Johannessen, L. R., & Spangler, S. (2010). Mentoring matters: Mentoring by modeling informal self-evaluation methods. The English Journal, 99(5), 100-102.

McGowen, W. R., & Osgathorpe, R. T. (2011). Student and faculty perceptions of effects of midcourse evaluation. To Improve the Academy, 29, 160-172.

McKeachie, W., & Svinicki, M. (2013). Teaching tips: Strategies, research, and theory for college and university teachers. Belmont, CA: Cengage Learning.

Murray, H. G. (2007). Low-inference teaching behaviors and college teaching effectiveness: Recent developments and controversies. In R. P. Perry & J. C. Smart (Eds.), The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 145-200). Dordrecht, The Netherlands: Springer.

Pulich, M. A. (1984). Better use of student evaluations for teaching effectiveness. Improving College and University Teaching, 32(2), 91-94.

Schwier, R. A. (1982). Design and use of student evaluation instruments in instructional development. Journal of Instructional Development, 5(4), 28-34.

Simonson, S., Earl, B., & Frary, M. (2021, May). Establishing a framework for assessing teaching effectiveness. College Teaching.

Sipple, S., & Lightner, R. (Eds.). (2013). Developing faculty learning communities at two-year colleges: Collaborative models to improve teaching and learning. Sterling, VA: Stylus.

Supiano, B. (2018, June 29). A university overhauled its course evaluation to get better feedback. Here’s what changed. The Chronicle of Higher Education.

Veeck, A., O'Reilly, K., MacMillan, A., & Yu, H. (2015). The use of collaborative midterm student evaluations to provide actionable results. Journal of Marketing Education, 1-13.

Wieman, C. (2015). A better way to evaluate undergraduate teaching. Change: The Magazine of Higher Learning, 47(1), 6-15.

Wiggins, B. L., Eddy, S. L., Wener-Fligner, L., Freisem, K., Grunspan, D. Z., Theobald, E. J., Timbrook, J., & Crowe, A. J. (2017). ASPECT: A survey to assess student perspective of engagement in an active-learning classroom. CBE—Life Sciences Education, 16(2), 1-13.