Wednesday, March 30, 2016

Variables

Sixth grade sucked.

I was at a new school: new kids, new teachers, new community, new city, new state.  It's one thing to pick up your things entirely and go; it's another to do so at the onset of adolescence.  I witnessed it daily: students reduced to social cannibals, consuming the well-being of another simply to raise themselves up.  The anxiety that washed over me walking into the cafeteria to my assigned table of "peers" still subtly blips on my radar when I get nostalgic.  An emotional scab that eventually became a scar.  It doesn't hurt anymore, but I know it's still there.

Sixth grade was the final year students were required to complete a science fair project.  The logic made sense:  many kids didn't do them; most of the ones that did sucked; and the ones that were actually good probably didn't need the science fair to affirm their status as Lords of Inquiry.  Either way, I was a good student, so putting forth a decent effort would suffice.  The teacher would surely accept my work and move on.
I remember walking to the front of the class and setting up my tri-fold.  My crush was watching.  Now was my chance to dispatch the greatest pheromone of all:  knowledge.  I knew that if I explained my project and my exemplary discoveries, her insides would soon begin to quake.  She would be mine.

My proposed question was simple:  Does caffeine influence the growth of plants?  My experiment centered on giving my plants varying amounts of caffeinated beverages, the idea being that the stimulant might spur greater growth.  A slave to marketing, I knew the coolness and excitement of Mountain Dew would be the perfect vessel for my stimulant.  I also included Pepsi, Diet Coke, and Sprite.  Eventually, my results showed that the plants given caffeinated soft drinks did not grow...or so I thought.

As I systematically discussed my purpose, hypothesis, and experiment, I was suddenly interrupted.  I was thrown off track.  I had been betrayed.  You don't put the good kids on the spot like that.  I was your Golden Child, Mr. B.  A rose amongst the thorns.
"So you tested caffeine on plants?"
"Yeah..."
"Why did you choose soft drinks?"
"Well...because I know they have caffeine in them."
"Do you know how much caffeine each one had in them?"
"Not exactly.  I know Mountain Dew has a lot."
"So are you sure it was the caffeine was what killed the plants?"
"I'm not sure..."
"I ask, because these are all different soft drinks with different ingredients.  How do you know what killed them?"
"I don't know..."
"You could have maybe used caffeine pills or tablets..."

I could feel the blood rush to my face.  It was inevitable.  The muscles at the corners of my mouth began to ache as I fought back the frown that precedes tears.  Eventually Mr. B eased up and let me die in peace in front of the class.  I looked over at my crush, and she was talking to her friends;  probably about how shitty I was and how I made diabetic plants.  "That's it," I probably thought, "my penis is useless now."

My experiment wasn't valid.  I learned this the hard way.  As a man who teaches science, who teaches not just a science class but a STEM course as well, a man who basically has a damn Master's in the concepts of Scientific Inquiry, I now know better.  The problem wasn't the Mountain Dew or the fact that I had committed horticultural genocide.  The problem was that I didn't know whether it was the caffeine, the dyes, the sugar, or the artificial sweetener that caused the lack of growth.  Had I just used plain caffeine pills, I would have been okay.  Hell, even Jessie Spano knew this, and she ended up being a stripper.  In short, my experiment was worthless because it didn't answer my original question.

I had tested too many variables.
"I'm addicted to over the counter stimulants!"
------------------------------------------------------------------------------------------------------------

Without a doubt, one of the most controversial developments in education in Ohio is the creation of a new teacher evaluation system.  Rather than leave it up to local districts to determine the effectiveness of teachers, the state department of education opted to create a vast template and evaluation system that would score and rate teachers.  One portion is student data, which is determined from a variety of sources.  Sometimes the student data is derived from state testing (which is ever-changing), sometimes it comes from a local vendor test (which may not address the standards that you teach), and sometimes it comes from a pre- and post-test that the teacher creates (but not all teachers are eligible).  Needless to say, there are a variety of hoops one can jump through, each of a different size, shape, and flammability.  At the end of all the hoops, there's a woman with a beard who may be into you, but you avoid her because you're married.  Also, the beard.

The second portion, and arguably the most controllable, is the classroom observation.  A teacher is subject to no fewer than two formal observations from an administrator.  Each lesson observation is a three-day process:  a pre-observation, in which you basically inform them "here is what I am going to do, here's why I'm doing it, and here's why it's good"; a formal observation, in which you do the damn thing; and a post-observation, in which you say "here's what I did and why, and here's what I will do next time."
At the end of the year, your rating for both student data and classroom observation is plugged into the chart below, and they deem you "Accomplished," "Skilled," "Developing," or "Ineffective."
So, for example, say a teacher had a horrible lesson and was rated "Ineffective" in their classroom observation, yet their students performed very well on their assigned assessment and earned a "Most Effective" rating.  You would add the two scores together (0 + 600) and average them for an overall score of 300.  According to the chart, the teacher would be "Skilled."  Skilled is a good thing.  Most administrators and evaluators say that Skilled is pretty much where most decent-to-good teachers will land.  "Accomplished?"  To quote an old principal of mine, "you have to do so much and be so efficient in your class, it's ridiculous.  Nobody will get Accomplished.  You can aim for it, sure, but the detail they're looking for is obscene."
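If you want the arithmetic laid out, here's a minimal sketch in Python.  The 0-600 scale and the example of 0 plus 600 averaging to 300 and landing in "Skilled" come straight from the paragraph above; the exact cut scores for each rating band are placeholder assumptions on my part, since the actual chart isn't reproduced here.

```python
# Minimal sketch of the averaging described above.
# The 0-600 scale and the worked example (0 + 600 -> average of 300 -> "Skilled")
# come from the post; the cut scores below are placeholder assumptions, not the
# official chart.

def overall_rating(observation_score, student_data_score):
    average = (observation_score + student_data_score) / 2
    if average >= 450:      # assumed cutoff
        return "Accomplished"
    elif average >= 200:    # assumed cutoff; the 300 from the example lands here
        return "Skilled"
    elif average >= 100:    # assumed cutoff
        return "Developing"
    else:
        return "Ineffective"

# The example from the post: an "Ineffective" observation (0) averaged with a
# "Most Effective" student-data score (600) gives 300.
print(overall_rating(0, 600))  # Skilled
```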

Needless to say, I'm still teaching, so you can assume that, with the exception of that infamous "outlier year," I'm doing pretty well.  Not that I'm boasting, but I feel like if I weren't any good at my job, I would probably know by now.  Recently I had an opportunity to put the system to the test.  To evaluate my evaluations and, in a way, to evaluate the evaluation system as a whole...

For my second observation of this school year, I contacted my evaluator immediately to schedule a time.  I didn't want this lingering over me for months, especially before our state and district assessments.  It was like the common cold:  kind of annoying, and I just wanted to be rid of it.  As I looked at my calendar and my lesson plan timeline, I saw an opportunity.  We agreed on a date and time, and we were going to do the damn thing.

The idea was simple:  I was going to do the same lesson that Mr. Evans had shit all over not two years before.  I wasn't going to make any changes.  It was going to be the EXACT SAME LESSON.  I had to be sure.  Change only one variable.

Everything went as smoothly as I had hoped.  I had the pre-assessment data to back up my lesson goals.  The learning targets were aligned with the standards.  My assessment strategies were solid.  I was confident going into the post-conference, but you can never be too sure with these things.  I didn't expect the rug to be jerked out from under me, but I did approach it somewhat guarded and prepared for battle if needed.

It was not needed.

The evaluation was fairly informal in structure, as we talked about the different aspects of the lesson. This is how I envisioned an evaluation should go:  two professionals actually discussing the pros and cons, rather than one person droning on with "here's what I saw, here's what I think of you as a teacher, and that's it."  I openly talked about what I liked about the lesson and what I would like to see go better in my class.  A strength of a good teacher is admitting and understanding shortcomings.  You can't grow if you don't acknowledge them.

I was amazed by the stark contrast in how the two evaluations went.  Mr. Evans, as you recall, spent the better part of 40 minutes discussing my lesson:  what he saw, what he liked, and what he disliked.  He then dropped the "buuuuuuut I'd like to see another lesson" bomb with minutes to spare.  He mostly made a laundry list of bullshit "issues" as evidence for a third and final observation:  I called on one student more than others, another student was reading her book as students were packing up before being dismissed;  those types of things.  I was surprised he didn't mention how one girl got up during my instructions to blow her nose.  "You need to be sure you have control of your class," he would probably say.  "Having a student get up and blow her nose could be distracting to the rest of the group."

As my evaluation wrapped up, my assistant principal showed me his rubric.  As he scrolled down, I saw the markings and feedback he had made.  With that, he concluded, "I have no worries about you.  I can tell you that I take this very, very seriously.  I don't hand out these evaluations to just anyone, and I don't take it lightly.  I'll go ahead and wrap this up, send it to you.  Look it over, and let me know if you want to talk about any of it.  If not, go ahead and pin it and you'll be done for the year."

Two days later I had my answer to both "Am I doing alright?" and "Was Mr. Evans a really shitty administrator and/or human being?"

There, amidst a sea of verbiage, was the solitary word that answered my two inquiries:
ACCOMPLISHED.

[drops mic]