I use a standards-based assessment system in my classroom. Assessments are marked based on evidence of conceptual understanding using a pretty extensive rubric. Most of my students are new to this idea, and it's a struggle for them to understand. The students who struggle the most are often the ones who have had some success in traditional math classes, where grades are based more on the ability to perform algorithms, procedures, and calculations fluently. Assessing for understanding requires that they not only perform procedures correctly (which is of course still important), but also show evidence that they understand the underlying concepts. This is incredibly frustrating for students who are able to learn procedures without understanding why they work. It's also tricky for me, as a teacher, to illustrate why this is so important. On a recent precalculus assessment over vectors, some students produced interesting responses that, sometimes within the same student's test, made me think a great deal about this difference, and provided me with some great fodder for explaining it.

Context: The learning target for this assessment is NVM.A and B: Represent and model with vector quantities and perform operations on vectors. I knew the students needed some scaffolding for the process of adding vectors. It is complicated, with many opportunities for errors. It's also a great example of the kind of problem where you can do the calculations absolutely correctly without having any understanding of what you're doing: just give me some formulas for r and theta, and I'm golden, right? Well, not really, especially when you have to figure out the angle at the end. There might be a bunch of "rules" to teach students about when to subtract from 180, add to 180, ditto for 360, but I don't use 'em, or know 'em: in my opinion, you really have to understand what you're doing to reach valid solutions on these problems.
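(A side note of mine, not from the class materials: one reason those "subtract from 180" rules exist is that the one-argument inverse tangent can't tell quadrants apart. A two-argument arctangent sidesteps all of it. Here's a minimal sketch of vector addition in component form; the function name and the example vectors are my own invention.)

```python
import math

def add_vectors(r1, theta1, r2, theta2):
    """Add two vectors given as (magnitude, direction in degrees);
    return the resultant's magnitude and direction in degrees."""
    # Resolve each vector into x and y components.
    x = r1 * math.cos(math.radians(theta1)) + r2 * math.cos(math.radians(theta2))
    y = r1 * math.sin(math.radians(theta1)) + r2 * math.sin(math.radians(theta2))
    r = math.hypot(x, y)                    # magnitude of the resultant
    theta = math.degrees(math.atan2(y, x))  # atan2 already knows the quadrant
    return r, theta % 360                   # report direction in [0, 360)

# Example: 3 units east plus 4 units north -> 5 units at about 53.1 degrees
r, theta = add_vectors(3, 0, 4, 90)
```

The point for students stands either way: the tool handles the quadrant bookkeeping only because the components carry their signs, which is exactly the understanding the "rules" paper over.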
I don't know a better way to show this understanding than visual models. The week before the test, we went through this problem together as a class. Students worked it out and put it in their notebooks for reference. My assessments are open-notes, so the intention of this exercise was to give students a thorough walkthrough of one vector addition problem to use to solve problems on the assessment. The point I made repeatedly while working this through with three sections of students was this:
Student A's response to the boating problem made me really hopeful: precise use of notation, clear reasoning, good calculations, and a visual model that, while not very accurate, at least shows that A has a reasonable idea of where the boat's going. Then I saw the angle at the end, and said to myself, "Dangit! A is just blindly following the procedure we did in class from his notebook. What a bummer!"

Student C used COLORS!!!, which they know makes me biased. But hey! C screwed up the calculation for the angle at the end. WRONG, RIGHT? Well, yeah, until C did this super-sweet confirmation to check if the answer made sense. C gets it, just made a calculation error. Proficient conceptual understanding, needs work on procedures and showing reasoning.

Here's student D, who's a little further from the goal than student B, but using some correct procedures. What's missing? Well, there are quite a few things missing, but most of all, it's sense-making. Getting negative x and y values for a vector in the first quadrant should be a red flag for a student who understands what s/he is doing. D's calculator is in radian mode.

These are just a few examples of how interesting assessing responses can be when you look for understanding and reasoning rather than right or wrong. I'd love to hear opinions from anyone who would like to discuss.
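(For anyone who wants to see student D's error concretely, here's my own quick illustration, not something from the assessment: feed a degree measure into a calculator stuck in radian mode and a first-quadrant direction comes out with negative components.)

```python
import math

angle = 60  # a first-quadrant direction, intended as 60 degrees

# Degree mode done right: both components positive, as they should be.
x_deg = math.cos(math.radians(angle))  # 0.5
y_deg = math.sin(math.radians(angle))  # about 0.866

# Radian-mode mistake: 60 is silently treated as 60 radians.
x_rad = math.cos(angle)  # about -0.95, negative x for a "first-quadrant" vector
y_rad = math.sin(angle)  # about -0.30, the red flag a sense-maker would catch
```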
But back to the students. I’m only giving them written feedback on this assessment, no grade (we take two assessments over every topic, and this is just the first). I’m hoping that sharing these responses with the students next week and having them do a little assessment or comparison of their own work will help make the point clear: using the right algorithms, even if you do it well, isn’t enough to prove that you understand the concepts.
Back in January, I was wracking my brain about how to make assessments go better in my classes; there was too much stress, too much concentration on the wrong things: it felt too much like a big bad test that everyone should be stressed out about. So I thought to myself: what if they could take a break, take a walk, talk about the test, get rid of misconceptions and jitters?
And that's what we did.

Evolution: Phase 1

I didn't really have a good idea about how to do this the first time, so I was kind of loose with everything: "Work on the test for 10 minutes, then we'll take a walk. Come back, work for another 30, we'll do it again. Then 20 minutes to finish." The idea here was to have some time to get into it, then ask questions that might have come up, then some more work, then final questions. I didn't talk or answer any questions. I let students take notebooks and pencils with them, and bring them back into the room.
Student Feedback
After each class's first test walk, I asked for some feedback about the process:
OK, nothing really shocking there. I used the "cheating" results to have some conversations about why I write assessment questions the way I do, why I ask for so much from one question, and why my assessments are only 1-2 questions.
The next few questions on the survey gave me some more interesting feedback (summarizing and cherry-picking some results here). Did anything bother you about taking a break during the test?
Describe any suggestions you have for how we can make tests less stressful.
Phase 2

Teachers with more foresight than me probably know exactly what went wrong with this: some students used the breaks, especially the last one, to just copy each other's work. Of course they did! The process with the rest of the classes in this first round of tests allowed us to have some great discussions about how to show your understanding (and how to show that it's YOUR understanding). I pretty quickly stopped letting students take notes or any paper with them, instead sending a basket of whiteboards and markers with them that got erased before they came back into class. I also got rid of the second break: work 10 minutes, 10 minutes to discuss, 40 minutes for the rest of the test.

Results? I don't have really good data on this. Visual modeling increased, but so did paragraphs of writing where some good algebra steps would do just fine. "Less tell, more show!" became my most-used comment on assessments. I think it "felt" better, at least to me. At least most of the time. It also gave me some more freedom (along with some more directed test-prep on my part) to ask more open-ended questions: "Make up your own triangle and solve it to show me you understand trigonometry."

Then there was the oblique asymptote incident. One of my precal sections, my "difficult" class, last period of the day, test over rational expressions and functions, Illustrative Math question about fuel efficiency... In this class, I have one student who's way ahead of everyone, a transfer this year from another school where his algebra 2 class covered most of what I have to cover to meet the needs of the students here. He's the "go-to guy" during test walks, the one the other students crowd around. He and I had had a discussion about a similar problem, and how to tie the idea of an oblique asymptote to the context and the solution. It wasn't something that most of the students were ready for, but he was. Here's some of the nonsense I got back on this test:
I threw these and a few others into a presentation (yes, along with some positive things, too) and used it to have a very pointed discussion about cheating. I also found ways to remove this student from the conversation during walks (by having my own conversation with him) so that the others wouldn't get distracted by things they're not ready for.
Phase 3

I settled on a 3-minute reading period (look over the test and strategize: no talking, no writing, no calculators) followed by a 10-minute walk before each test, with whiteboards if students want to use them. Some students still try to memorize the entire problem and get classmates to give them the "answer" (whatever that means), but this is where I think I can feel comfortable. This is what I did for the rest of the year.

Reflection: Why do this?

Test walks are a pain in my ass, mostly because they cause me to spend so much time thinking about and watching for academic dishonesty. I'm not sure if they really help with assessment results, because I'm too lazy to do a real look into the scores, and my records aren't good enough to call this useful data yet. I still have assessments where the majority of students miss the mark completely and/or give nonsense answers to questions. I do this, and I'm going to continue to do this, because of the mathematical discourse it produces. The "pressure" of these discussions produces the best mathematical discussions I ever get to witness, even from the students who are the most disengaged in the classroom. Students argue their case, critique the reasoning of their classmates, ask questions, and don't stop asking until they get it. On one of the last walks this year, I tried to capture this in a video. It's kind of hard to hear what they're saying, but you could probably get the idea with the sound off.
So, the question I'm working on now is: how do I get this kind of engagement as a normal part of my classroom... without having to give a test every day?
I need to start this off with a big thank-you to my badass wife Rani for her help making the badass posters you'll see below. She's a badass 5th grade teacher. My last two posts have been about my journey towards understanding SBA and some new understandings about assessment I gained this summer. I'm about to start my third week of school, and I think I have something ready to present to students (and also parents; back-to-school night is this Tuesday). I've broken it up into three sections.

What I Assess

Based on the four claims, here's what I've put together in an attempt to make this clear to a population that has had no experience with standards thus far. I debated what exactly to present here, and decided just to keep it simple: these are the four things that matter, really matter, in understanding mathematics. I chose to leave out the Practice Standards and just include some of their wording in the descriptions. I might make a different decision if these students had any experience with standards, but I think this sums things up without overloading.

How I Assess

I have some experience with a 1-4 scale, so I'm sticking with that. I think I can explain it clearly and assess fairly using this model. The big idea is that I am assessing your level of understanding, not your ability to do a certain percentage of math problems correctly. I hope I can make this clear.

How this all translates to grades
I have to give a percentage grade; there's no way around it at my school. This is sort of the hardest part for me. My last school was an IB school, so I used historical data from IB exams to set up my system there. It's a much more forgiving system, in terms of percentages, than the classic American system where 60% is the minimum passing grade. I've done a lot of blog reading about this, and I think I've got something I can work with:

Grading Scale (what I put in the grading system)
0 = 0%
1 = 50%
2 = 70%
3 = 85%
4 = 100%

Weighting
10%: Practice work (includes homework, classwork, etc., and is pretty exclusively completion grades)
65%: Summative assessments
20%: Cumulative exams
5%: Awesomeness

Awesomeness, you ask? Yeah, this is something I threw in to try to keep kids on their toes.
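(For the curious, the scale-plus-weighting arithmetic can be sketched in a few lines. This is my own illustration, not the author's gradebook: the category keys, the sample scores, and especially the linear interpolation between scale points for non-integer averages are all my assumptions.)

```python
# 0-4 rubric scores map to these percentages, per the scale above.
SCALE = {0: 0.00, 1: 0.50, 2: 0.70, 3: 0.85, 4: 1.00}

# Category weights, per the weighting above.
WEIGHTS = {"practice": 0.10, "summative": 0.65, "cumulative": 0.20, "awesomeness": 0.05}

def course_grade(category_scores):
    """category_scores: dict mapping category -> average 0-4 rubric score.
    Returns the weighted percentage grade."""
    total = 0.0
    for category, weight in WEIGHTS.items():
        avg = category_scores[category]
        lo = int(avg)
        frac = avg - lo
        # Interpolate between adjacent scale points for non-integer averages
        # (one plausible choice; the post doesn't specify how to handle these).
        pct = SCALE[lo] + frac * (SCALE[min(lo + 1, 4)] - SCALE[lo])
        total += weight * pct
    return round(100 * total, 1)

# A student averaging 3 ("proficient") in every category lands at 85%.
grade = course_grade({"practice": 3, "summative": 3, "cumulative": 3, "awesomeness": 3})
```

One nice property of this kind of scale: a 1 is worth 50%, so a single low score doesn't crater the average the way a 0-out-of-100 does in the classic system.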
So, by the time parents come in for back to school night on Tuesday and hear about this, I’ll have presented it to all of my students as well. I expect some pushback, but I’ve thought about this for a long time now, and I’m feeling confident and ready to support my position clearly. Wish me luck! And, as always, let me know what you think. Where I'm coming from
I got my teaching license in 2003. None of my coursework for teaching, including my student teaching, had anything to do with standards. Then I got a job outside of the traditional school setting, out of the loop as far as current best practices and educational reform were concerned. I kept this job for 8 years. In 2012, I got back into classroom teaching by landing a job at a brand-new international school in China. The leadership of the school was pretty progressive, and decided from the beginning to go with the most current research-based practices. I still remember the staff meeting where we were introduced to Standards-Based (SB) Grading, Assessment, and Reporting (I'm going to use SBG, SBA, and SBR, and try to explain why I differentiate later). Few of the staff (some with much more experience than I) knew about or had much experience with SB anything, and it was kind of a shock to most of us. Oh yeah, BTW, this meeting happened AFTER two or so weeks of instruction, AFTER some of us had already distributed syllabi, grading scales, etc. I had no idea what anyone was talking about; the only grading I knew was percentages and ABCs. I cried in the bathroom that day...

Pretty quickly, watching others struggle with this, I came to the understanding that my lack of experience was an advantage. I didn't have to deal with, or unlearn, years of assessing any other way; I just had to get my head around doing it this way. I was also the only secondary math teacher in the school, so I had an incredible, and often intimidating, level of freedom in developing my curriculum and classroom practices. So I bought in, did the work, and started learning. Needless to say, I learned more through the experience of teaching than I ever had in any class about teaching. I learned more through personal research, struggling with frustrations, and searching for my own answers than I ever have from professional development.
After five years with this school, I feel like I have a relatively good grasp on the idea of SB, although I'm still struggling with the practice and implementation.

Where I am now

As far as I can tell, so is everyone else. I ask educators and administrators about their implementations whenever I can, and I read a lot of blogs and articles on the subject. Over the last five years, only one educator I've spoken with said that his school had "completely figured out" SBG; further conversation revealed that what he really meant was that his school had aligned a 1-4 grading scale with a percentage grading scale in a way that the majority of teachers, students, parents, and other stakeholders accepted. Everyone else tells the truth: it's a journey, a learning process that no one seems to have nailed down completely yet. There are great ideas out there, but there doesn't seem to be anyone (other than that guy) who's willing to say they've got all the answers and know exactly how it should be implemented. I like to separate SB, especially when it involves grades, into three areas that help me think about my own practice. These distinctions are mine, from my experience, and may be different from others'. (They may also be wrong! :)
Where I'm going

Earlier this summer, I participated in an assessment workshop for AERO which gave me a whole new way to think about the standards, and I really want to write a post about it later. The shift that's rolling around in my head involves using clusters (not specific standards) to come up with targets, and couching the targets in the four claims from the Smarter Balanced Assessment Consortium. In a few days, I'm heading out to start a new job in Pakistan! I don't know everything about how things work there, but I don't think they're using SBR yet (kind of a relief to me). They use percentage scales and letter grades for reports, but I've been reading a lot on how other teachers are doing SBG within their own classrooms, even if it's not a school-wide practice, and even if they eventually have to show a letter grade. Overall, I don't think I can do assessment any other way, so SBA will be a part of what I do no matter where I teach. If anyone made it this far, thanks for reading. I hope to keep posting on this journey, reading about what others are doing, and refining my practice.

People who've helped me think about this (not an exhaustive list, INPO). Follow the links for some great posts on SBG:
Michael Matera
Dan Meyer
Dane Ehlert
Jonathan Claydon
Nora Oswald
Jon Lind: Let's see if I can keep up with a blog!
May 2018