Once you’ve created your rubric, the next step is to use it. As noted earlier, during the creation process you should have considered both who would use the rubric and who would see it. Here are some guidelines for putting the rubric to use. They may seem like a lot, but as you use more rubrics, they will become second nature.

  1. Begin slowly. The first time you use a rubric, look over the students’ work carefully to make sure you have a good idea of how to interpret your criteria and scoring dimensions. Often, on a first pass with a rubric, you will change your mind at some point about the scoring divisions on at least one of the criteria. You are then left to go back through the work you have already scored to see if those scores need to be corrected.
  2. Try hard to stick to the rubric. When you deviate from the rubric, you introduce your own personal biases into the scoring. You also may not deviate in the same way from one student to the next, reducing the reliability of your scoring.
  3. Monitor your thinking as you go through the assignments to ensure that you are maintaining consistency.
  4. Train multiple scorers. If more than one person will use the same rubric, it is very important to achieve inter-scorer reliability and consistency. One way to build consistency is to first test scorers on a small sample of student work, verifying that they produce the same results and that those results are in line with your expectations. An example of such a situation is two instructors teaching two sections of the same class. They may apply the same rubric in different ways, making one section’s scores appear different from the other’s in a manner unrelated to student knowledge, instructor ability, or any other real variable.
  5. Beware of boredom. No matter what method you use, grading many papers becomes tedious by the later ones. It is not uncommon to begin to think of the current paper as no different from the last, and thus probably deserving of the same grade. Especially with an analytic rubric, you cannot judge a final score by the look and feel of an assignment. You have to maintain diligence.
  6. Along the same lines, don’t skim. I once had a high school instructor who obviously skimmed student work. Several students would randomly add sentences to their work such as, “You stink,” although often in not such nice language, and the instructor went most of the year without noticing, or at least without saying anything. There is more to a paper than the first sentence of every paragraph and its overall length, and the same principle extends to other kinds of assignments.
  7. It is not how a paper looks that you are scoring, but the work inside. While presentation may be a part of the score, don’t let an initial impression bias your assessment of the content.
  8. You are not grading the student, but their work. Just because you like a student, or they always do well in the course, does not mean that the current assignment is as good as the others. From personal experience, I would always try to get the best score possible on the first assignments and actively participate at the beginning of a course. Those first impressions can carry over through the course: even if a student begins to slack off, their later scores may not accurately reflect it if bias inflates them.
  9. Avoid empathic scoring unless that is your intent. In other words, unless effort is one of your criteria, you shouldn’t grade the final work on it. If something comes more easily to one student than to another, an accurate grade may reflect that when the struggling student cannot produce quality work. Unless there is an individualized education plan with accommodations for effort or some other factor, effort should usually not enter your scoring. An A for effort is fine as one part of an overall grade, but it should not be weighted heavily enough to raise a letter grade unless there is a clear objective for effort.
  10. Beware of order effects. After viewing 10 good assignments, the next assignment that doesn’t measure up may be given a lower grade than it deserves. The corollary is also true: after 10 bad assignments, an average one may be given a higher grade than it deserves.
  11. The error of central tendency. It was mentioned before that some rubric developers prefer an even-numbered scale to an odd one. The reason is the tendency to pull scores toward the middle: if a work seems average, it gets a 3 out of 5, and for above-average work you may hesitate to go higher because a 5 means excellent, not merely above average. On a 5-point scale, don’t forget to use the 4 and the 2 when a work deserves those intermediate scores.
  12. Possible pitfalls with multiple scorers exist. Use a sample score and training to bring the scorers together.
    • Positive-Negative leniency error. One scorer is harder or easier than the other.
    • Trait error. One scorer is harder or easier on a specific criterion.
    • Length effects. One scorer tends to grade easier as the assignments get longer or the more assignments that have already been viewed.
    • Personality clashes. One scorer may like a particular student or believe that a given topic is more important. Sometimes the only way to correct for such an error is to add a weighting factor to final grades. Determining that weighting factor can be difficult, though, and may require you to go through many samples of the grader’s work to compare it against the others and determine what corrections are needed.
    • Self-scoring. Some people are harder on themselves than others.
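Guidelines 4 and 12 both come down to measuring whether two scorers behave the same way on a common sample. As a rough sketch (the scorer names and numbers below are invented for illustration, not from any real class), two standard-library statistics make these errors visible: the mean gap between scorers exposes a positive-negative leniency error, while the correlation between their scores shows whether they at least rank the work the same way.

```python
from statistics import mean, stdev

# Hypothetical scores from two scorers on the same five sample papers.
scorer_a = [4, 3, 5, 2, 4]
scorer_b = [3, 3, 4, 2, 3]

def pearson(x, y):
    """Pearson correlation: do the two scorers rank work the same way?"""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# A consistent nonzero gap suggests a leniency error (one scorer harsher);
# a low correlation suggests they disagree about which work is better.
leniency_gap = mean(scorer_a) - mean(scorer_b)
agreement = pearson(scorer_a, scorer_b)
print(f"leniency gap: {leniency_gap:+.2f}")  # +0.60 for this sample
print(f"correlation:  {agreement:.2f}")      # 0.93 for this sample
```

A steady gap can be corrected with retraining or a weighting factor, but a low correlation usually means the criteria themselves are being interpreted differently, which calls for clearer descriptors rather than a numeric fix.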

Evaluating Rubrics

Once you’ve used a rubric, it is important to see whether it works. Are the results from the rubric valid, reliable, consistent, objective, usable, and so on? The following questions should be asked to put your rubric to the test. When a rubric is not up to par, a redesign is probably advisable; however, a redesign does not necessarily require repeating all of the steps of rubric creation.

  • Validity – Was the rubric too hard or too easy when reporting final scores? Courses can use absolute scoring, where students have to get a certain number of points for an A. If a rubric is too hard or too easy, then final grades could be affected.
  • Validity – Are you experiencing a problem of central tendency? Some courses use a relative scale to distinguish among individual students. If everyone receives similar grades when using a rubric, it is difficult to assign relative grades. The rubric may be written in such a way that too many works fit into the same relative level of achievement. You may want to rewrite the rubric so that there are more delineations, or so that the delineations between scores are more applicable.
  • Validity – Was there a clear basis for assigning scores at the various levels within each criterion? After using the rubric, did it make sense not just when, but why, certain assignments received higher or lower scores than others? If not, identify what did not make sense and address the issue in your redesign of the rubric.
  • Validity – Did the rubric address something that the students were not expecting? Especially when the rubric is not shown to students beforehand, it may assess something that was neither expressed in the directions nor expected by the students. You may even have included a criterion that scored material you did not actually cover in your instruction. Either the instruction or the rubric probably needs to change so that it reflects what the students are learning and should know; the rubric should not assess extraneous material.
  • Validity – Was something left out? Even if you use student examples when creating the rubric, in some subjects the state of knowledge changes with time. The assignments students turn in may vary accordingly, and something that should have been assessed by the rubric may have been omitted. There are other reasons something may be missing too, such as simply overlooking a concept while brainstorming. Take the lesson learned and apply it to a redesign of the rubric for future use.
  • Validity – Was the rubric developmentally appropriate? Especially in the earlier grades, it is a common mistake to design a rubric at a level of understanding above what the students are capable of attaining. It is important to consider the audience in both the initial design and any redesign of a rubric. If a certain criterion is consistently scored low, either the instruction is lacking or the scale is worded in a way that negatively biases the results.
  • Reliability – Did you experience difficulties assigning scores for all student work? Perhaps some assignments did not fit well into your scoring plan.
  • Consistency – As you were using the rubric, did you notice yourself changing your mind about the scale of a criterion? While it is difficult to redesign a rubric halfway through grading, it is important to verify earlier scores whenever your attitude changes while applying the rubric. Keep a list of the changes you want to make so that you can update the rubric before the next time it is used.
  • Consistency – When using multiple scorers, it is often a good idea to have everyone grade a few assignments in common. Afterward, the scores can be compared to look for consistency among scorers. Usually, such issues should be addressed before rubric use, but weighting of scores can be used to correct for differences post-scoring.
  • Objectivity – Was there internal bias in the rubric, and possibly in the assignment? Using a technique such as a t-test on the differences in certain criteria across groups by gender, race, economics, and so on, you can see whether some bias you were not anticipating was built into the rubric. Instructors rarely take the time to examine such issues after instruction, relying instead on instructional design or simply their own feelings to catch biases; but it is equally rare for anyone to detect bias within themselves. Statistical analysis is often a more accurate measure of bias.
  • Usability – Can the rubric be applied to multiple assignments? Although it may seem like there is a lot of work ahead of you, keep in mind that a new rubric does not need to be constructed for every assignment. Often, only a few are needed within an entire course. Look for ways of combining, altering, refining, or otherwise adapting a rubric so that it applies to multiple assessments. Once it is applied, evaluate how well it worked in that role. If the rubric is not working out as you expected, you may need separate rubrics for the various assignments.
  • Usability – Was the rubric practical? After all of the work is finished, and considering all of the possible assignments you could use the rubric on, is the rubric really helping you?
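The objectivity question above suggests a t-test on criterion scores across groups. As a minimal sketch using only the standard library (the group labels and scores here are hypothetical), Welch’s two-sample t statistic can flag a score gap worth investigating:

```python
from statistics import mean, variance
import math

def welch_t(x, y):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    vx, vy = variance(x) / len(x), variance(y) / len(y)
    t = (mean(x) - mean(y)) / math.sqrt(vx + vy)
    df = (vx + vy) ** 2 / (vx ** 2 / (len(x) - 1) + vy ** 2 / (len(y) - 1))
    return t, df

# Hypothetical scores on one criterion, split by some demographic group.
group_1 = [4, 5, 3, 4, 4, 5, 3, 4]
group_2 = [3, 3, 4, 2, 3, 3, 2, 4]

t, df = welch_t(group_1, group_2)
print(f"t = {t:.2f} on about {df:.1f} degrees of freedom")
```

For this invented sample, |t| is around 2.6 on about 14 degrees of freedom, which would cross the usual 5% threshold; with classroom-sized samples, though, treat such a result as a prompt to re-read the rubric’s wording for unintended bias, not as proof of it.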

If any of the above questions is answered unfavorably, consider redesigning the rubric, or perhaps even another method of scoring or assessing the assignment. It is unusual to get everything right the first time. Even when the rubric is piloted, you may miss something that needs to be addressed later, so be prepared to make some changes.