
You Are Not a Robot, Do Not Grade Like One

First, let me confess that I have done bad things to students with grading. My teacher prep program provided no research or guidance on grading methods. I distinctly remember teaching summer school early in my career when a student was upset that I wouldn’t round up his grade. The kid had an 89.4 and I was not going to round that up. If he had just studied for one quiz, it would have pushed his grade over the edge. I remember LAUGHING about it. Silly kid, sheesh. Can’t believe he’s complaining to the office. I dug in my heels and I was not going to move.

I WAS A JERK.

What Shapes Your Grading Policies?

Why do you grade the way you grade? What are your policies based on? I will say that for years my answer was HABIT. Because it was done to me. Not one lick of research was influencing my policies.

“There’s probably no area in all of education where the gap between a knowledge base and practice is greater than in the area of grading.” (Thomas Guskey)

Why is 90% an A? Not because of any research supporting that practice. Some dude at Yale just made that up as a means of ranking students. Why do we not allow retakes? Not because research says that is a good idea. More logistics than good learning pedagogy.

Very likely your teacher prep program did a POOR JOB of providing you with research on grading and assessment methods. I cannot encourage you strongly enough to spend some time digging into the research on this. I cannot find any research that supports the traditional system of grading as being good for learning or accurate for measuring learning.

Margin of Error

I did not grade the top of the stack of papers the same as the bottom of the stack. We cannot be laser-sharp consistent from student to student, let alone assignment to assignment. Every single thing we grade has a margin of error, and that margin of error can be quite significant. Give the same paper out to a room full of teachers to score, and the gap between the highest and lowest score can be 30%! THREE LETTER GRADES! Lots of things influence our grading, including how tired we are, how many other papers we’ve looked at, how much we like the kid…

We do NOT have the ability to be laser sharp enough to grade on a 100-point scale, let alone have our grades be accurate to the tenths place.

You are a human being and not a robot. It is impossible for you to be accurate and consistent. You make mistakes in assessing students. Students make mistakes in demonstrating what they really know. Sometimes they know more than the points gathered on a test show.

Statistically Valid Assessment?

I taught AP Stats one year when THREE of the free response questions were thrown out because a statistical analysis, run after the students took the test, showed them to be invalid. And you know those questions were field tested before the students took the test. How do we know our assessments (tests and quizzes in particular) are statistically valid measurements of learning?

Probably our questions are biased, poorly written in some cases, confusing, etc. I personally have never run a field test or a statistical analysis on any of my quizzes. How can I justify that the accumulation of these points to the HUNDREDTHS place is valid?

Math Errors

I cannot even tell you how many times I’ve helped a teacher with their gradebook, only to discover it was not calculating grades the way the teacher thought it was. The settings of the electronic gradebook were confusing. I cry to think how many students nationwide get a grade on their report card that does not match how the syllabus said it was going to be calculated. Almost always when I catch an error, the gradebook is not calculating in the student’s favor. In fact, this is oftentimes how the existence of an error is discovered: the teacher realizes the grades are lower than they think they should be.
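To see how easily gradebook settings can shift a grade, here is a minimal sketch with hypothetical scores, categories, and weights: the same raw points produce different final percentages depending on whether the gradebook is set to pool total points or to weight categories.

```python
# Hypothetical scores: (points earned, points possible) per category.
scores = {"homework": (45, 50), "tests": (70, 100)}

# Setting 1: "total points" -- pool every point together.
earned = sum(e for e, p in scores.values())
possible = sum(p for e, p in scores.values())
total_points_grade = 100 * earned / possible  # 115/150

# Setting 2: "weighted categories" -- average each category, then weight it.
weights = {"homework": 0.20, "tests": 0.80}  # hypothetical weights
weighted_grade = sum(weights[c] * 100 * e / p for c, (e, p) in scores.items())

print(round(total_points_grade, 1))  # 76.7
print(round(weighted_grade, 1))      # 74.0
```

Same student, same work, nearly a three-point swing. If the syllabus promised one setting and the gradebook is configured for the other, the report card is simply wrong.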

You or the Gradebook?

Who assesses your students? YOU, or a math formula created by an engineer with no teaching experience? Look at the kid. Look at the evidence. Does your knowledge of the student and the evidence match what the gradebook spits out? No? YOU YOU YOU YOU assess your students.

Your Margin of Error

Take a guess: what margin of error would you allow yourself, considering that you’re a human being? How accurate would you guess your assessment methods to be? Just for fun, here is a quick Google Form for you to record your guess.

Let’s say your margin of error on any particular assignment is 4%. It could be 4% higher or 4% lower. If a student has an 89.4%, that is within your margin of error of an A; how can you justify not rounding that up? If a student has a 96%, that too is within your margin of error; how would you not look more closely at the evidence and consider rounding it up? What if your margin of error is 30%? Seriously ask yourself: does the grade the gradebook is spitting out match my assessment of the student? The numbers are NOT an accurate measurement. Assess like a human being instead of like a robot.
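The arithmetic above can be sketched in a few lines. The 4% margin and the 90% cutoff are the hypothetical values from this section, not claims about any real grading scale:

```python
A_CUTOFF = 90.0  # the traditional (and arbitrary) cutoff for an A
MARGIN = 4.0     # assumed margin of error, in percentage points

def within_margin_of_a(grade, cutoff=A_CUTOFF, margin=MARGIN):
    """True if the A cutoff falls inside the band grade +/- margin,
    i.e. the score is statistically indistinguishable from an A."""
    return grade + margin >= cutoff

print(within_margin_of_a(89.4))  # True: 89.4 +/- 4 spans 85.4 to 93.4
print(within_margin_of_a(84.0))  # False: even the top of the band is below 90
```

With a 30% margin instead of 4%, nearly every grade in the gradebook sits inside the band around a cutoff, which is exactly why the human, not the formula, has to make the call.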

Assessment Experts

I really enjoy the research of Thomas Guskey and Rick Wormeli. I encourage you to follow them on Twitter, get their books, and read their research.
