Making the Assessment Process More Manageable

This chapter provides practical advice for improving the efficiency and effectiveness of assessment activities.

Tangible Suggestions for Making Assessment More Manageable

Maki (2004) suggests the following to make your assessment more manageable:

  • Develop and maintain an assessment plan so that everyone knows what’s coming.
  • Pick one learning goal per year for assessment and follow-up discussion and action.
  • Embed assessment into existing courses wherever possible.
  • Establish a departmental assessment day to concentrate efforts.
  • Collect data from a sample of students rather than all of them, if you have sufficient numbers of majors.
  • Require students to submit their work to a portfolio.
  • Identify experiences such as internships, field placements, undergraduate research, and study abroad that provide opportunities to collect evidence of student learning.
  • Employ a graduate student to help do the front-line work of analysis and interpretation.
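The sampling suggestion above can be sketched in code. This is a hypothetical illustration (the student IDs, sample size, and seed are all invented for the example): draw a reproducible random sample of majors whose work will be assessed, rather than assessing every student.

```python
import random

# Hypothetical example: rather than assessing every major, draw a random
# sample of student work. The IDs and sample size here are illustrative.
student_ids = [f"S{i:03d}" for i in range(1, 121)]  # e.g., 120 majors

random.seed(42)  # fixed seed keeps the sample reproducible and auditable
sample = random.sample(student_ids, k=30)  # assess 30 of the 120 portfolios

print(sorted(sample))
```

Fixing the seed means the department can later document exactly which students were sampled, which matters if the results feed an accreditation report.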

Setting Priorities for Assessment

Suskie (2009) suggests that in setting your priorities you should:

  • Start small.
  • Start by focusing on important goals.
  • Start with easier assessments.
  • Focus on approaches that yield the greatest dividends for the time and resources invested.
  • Work with samples rather than whole populations of students, where possible.
  • Stagger assessment activities.
  • Take advantage of existing resources.
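Two of these suggestions — assessing one goal per year and staggering activities — amount to a multi-year rotation plan. A minimal sketch, with invented goal names and start year, might look like this:

```python
# Hypothetical sketch: stagger assessment by rotating through one learning
# goal per year. The goal names and start year are illustrative only.
goals = ["Written communication", "Critical thinking",
         "Quantitative reasoning", "Disciplinary knowledge"]

start_year = 2025
# Map each of the next six academic years to one goal, cycling as needed.
schedule = {start_year + i: goals[i % len(goals)] for i in range(6)}

for year, goal in schedule.items():
    print(year, "->", goal)
```

Cycling back to the first goal in year five also creates a natural point to check whether earlier follow-up actions improved student learning.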

Examples of Assessment Information That May Already Be On Hand

Suskie lists the following kinds of information that may already be available for your assessment needs:

  • scores on published tests (ACT, placement, certification/licensure)
  • ratings of students by internship/practicum/field experience supervisors
  • assessment information assembled to meet disciplinary accreditation requirements
  • scores and scoring criteria for locally developed tests and assignments
  • retention and graduation rates
  • information on employment and subsequent education
  • surveys of students and alumni
  • information on student course-taking
  • information on student participation in internships/practica/field experiences, study abroad, Immersive Learning Virginia Ball Center projects, living-learning communities, undergraduate research, etc.
  • information on students’ use of technology (Blackboard, library resources)

Using Samples of Student Work for Assessment

Walvoord (2010) reports on the advantages and disadvantages of using samples of student work for assessment:

Advantages

  • Information is already available.
  • There are no student motivation problems, since students must complete the work for a grade.
  • There is no direct cost.
  • This work reflects what faculty members actually teach, not what is included on standardized tests, so faculty members are more motivated.

Disadvantages

  • Evidence is not comparable across institutions.
  • Everyone evaluates differently, so common standards or rubrics, along with rater training, are needed.
  • Information is in multiple parts and multiple formats, so it needs to be collected in portfolios.
  • There is quite a bit of work, especially at the beginning.

The Basic No-Frills Department Assessment System

Walvoord recommends that the following be included in a basic department assessment system:

  • learning goals for each degree program, co-curricular program, etc.
  • two measures of how well your students are achieving each goal:
    • One direct measure (e.g., student work samples near the time of graduation)
    • One indirect measure (e.g., surveys, interviews, or focus groups that ask students how well they feel they achieved each of the learning goals, what aspects of their program helped them achieve those goals, and what the department might do differently that would help students to learn more effectively)
  • one two-hour department meeting per year at which assessment results are discussed, at least one follow-up action to improve student learning is agreed upon, and notes are kept

Developing and Using Rubrics

The University of Virginia (n.d.) offers the following suggestions for developing and using rubrics:

Developing Rubrics

  • Clearly define the assignment including the topic, the process that students will work through, and the product they are expected to produce.
  • Brainstorm a list of what you expect to see in the student work that demonstrates the particular learning outcome(s) you are assessing.
  • Keep the list manageable (3-8 items) and focus on the most important abilities, knowledge, or attitudes expected.
  • Edit the list so that each component is specific and concrete (for instance, define what you mean by "coherence"); use action verbs when possible and descriptive, meaningful adjectives (e.g., not "adequate" or "appropriate" but "correctly" or "carefully").
  • Establish clear and detailed standards for performance for each component. Avoid relying on comparative language when distinguishing among performance levels. For instance, do not define the highest level as "thorough" and the medium level as "less thorough." Find descriptors that are unique to each level.
  • Develop a scoring scale.
  • Test the rubric with more than one rater by scoring a small sample of student work. Are your expectations too high or too low? Are some items difficult to rate and in need of revision?

Using Rubrics

  • Evaluators should meet together for a training/norming session.
  • A sample of student work should be examined and scored.
  • More than one faculty member should score the student work. Check to see if raters are applying the standards consistently.
  • If two faculty members disagree significantly (e.g., more than 1 point on a 4-point scale), a third person should score the work.
  • If frequent disagreements arise about a particular item, the item may need to be refined or removed.
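The double-scoring check described above can be sketched as a small script. This is only an illustration (the portfolio names and scores are invented): on a 4-point scale, any piece of work where two raters differ by more than 1 point is flagged for a third rater.

```python
# Hypothetical sketch of the double-scoring check: on a 4-point scale,
# flag work where two raters disagree by more than 1 point so a third
# rater can score it. Portfolio names and scores are illustrative.
scores = {
    "portfolio_01": (4, 3),  # within 1 point -> acceptable agreement
    "portfolio_02": (4, 2),  # differs by 2 -> route to a third rater
    "portfolio_03": (1, 3),  # differs by 2 -> route to a third rater
}

def needs_third_rater(rater_a: int, rater_b: int, threshold: int = 1) -> bool:
    """Return True when two ratings disagree by more than the threshold."""
    return abs(rater_a - rater_b) > threshold

flagged = [work for work, (a, b) in scores.items() if needs_third_rater(a, b)]
print(flagged)  # portfolios routed to a third rater
```

If the same rubric item keeps appearing in the flagged list across many papers, that is the signal, noted above, that the item itself needs refinement or removal.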

Available Rubric Libraries

Assessment Commons:

Association of American Colleges and Universities:

Fresno State University:

University of Delaware:

References

Maki, P. L. (2004). Assessment for learning: Building a sustainable commitment across the institution. Sterling, VA: American Association for Higher Education and Stylus Publishing.

Suskie, L. A. (2009). Assessing student learning: A common sense guide (2nd ed.). San Francisco: Jossey-Bass.

University of Virginia, Office of Institutional Assessment & Studies. (n.d.). Planning assessments.

Walvoord, B. E. (2010). Assessment clear and simple: A practical guide for institutions, departments, and general education (2nd ed.). San Francisco: Jossey-Bass.