I use a standards-based assessment system in my classroom. Assessments are marked based on evidence of conceptual understanding using a pretty extensive rubric. Most of my students are new to this idea, and it’s a struggle for them to understand.
The students who struggle the most are often the ones who have had some success in traditional math classes, where grades are based more on the ability to perform algorithms, procedures, and calculations fluently. Assessing for understanding requires that they not only perform procedures correctly (which is of course still important), but also show evidence that they understand the underlying concepts. This is incredibly frustrating for students who can learn procedures without understanding why they work.
It’s also tricky for me, as a teacher, to illustrate why this is so important. On a recent precalculus assessment over vectors, some students produced interesting responses that, sometimes within the same student’s test, made me think a great deal about this difference, and provided me with some great fodder for explaining it.
The learning target for this assessment is N-VM.A and B: Represent and model with vector quantities and perform operations on vectors. I knew the students needed some scaffolding for the process of adding vectors. It is complicated, with many opportunities for errors. It’s also a great example of the kind of problem where you can do the calculations absolutely correctly without having any understanding of what you’re doing: just give me some formulas for r and theta, and I’m golden, right? Well, not really, especially when you have to figure out the angle at the end. There might be a bunch of “rules” to teach students about when to subtract from 180, add to 180, ditto for 360, but I don’t use ‘em, or know ‘em: in my opinion, you really have to understand what you're doing to reach valid solutions on these problems. I don't know a better way to show this understanding than visual models.
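Here's a minimal sketch in Python of the component approach (my own illustration, not part of the assessment; the example vectors are made up). It resolves each vector into components and uses atan2, which handles the quadrant automatically, so there's no subtract-from-180 bookkeeping to memorize:

```python
import math

def add_vectors(vectors):
    """Sum vectors given as (magnitude, direction in degrees);
    return the resultant as (magnitude, direction in degrees)."""
    # Resolve into components. Python's trig functions work in
    # radians -- the same pitfall as a calculator left in radian
    # mode -- so convert explicitly.
    x = sum(r * math.cos(math.radians(t)) for r, t in vectors)
    y = sum(r * math.sin(math.radians(t)) for r, t in vectors)
    # atan2 reads the signs of both y and x, so the angle lands
    # in the correct quadrant with no special-case rules.
    theta = math.degrees(math.atan2(y, x)) % 360
    return math.hypot(x, y), theta

# e.g. a 3-unit vector due east plus a 4-unit vector due north:
r, theta = add_vectors([(3, 0), (4, 90)])
print(round(r, 2), round(theta, 2))  # 5.0 53.13
```

Of course, the point of the visual model is that a student can tell whether an answer like 53° even makes sense; the code just shows that the quadrant "rules" fall out of understanding the components.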
The week before the test, we went through this problem together as a class. Students worked it out, and put it in their notebooks for reference. My assessments are open-notes, so the intention of this exercise was to give students a thorough walk-through of one vector addition problem to use to solve problems on the assessment.
The point I made repeatedly while working this through with three sections of students was this:
Student A’s response to the boating problem made me really hopeful: precise use of notation, clear reasoning, good calculations, and a visual model that, while not very accurate, at least shows that A has a reasonable idea of where the boat’s going. Then I saw the angle at the end, and said to myself, “Dangit! A is just blindly following the procedure we did in class from his notebook. What a bummer!”
Student C used COLORS!!!, which they know makes me biased. But hey! C screwed up the calculation for the angle at the end. WRONG, RIGHT? Well, yeah, until C did this super-sweet confirmation to check if the answer made sense. C gets it, just made a calculation error. Proficient conceptual understanding, needs work on procedures and showing reasoning.
Here’s student D, who’s a little further from the goal than student B, but using some correct procedures. What’s missing? Well, there are quite a few things missing, but most of all, it’s sense-making. Getting negative x and y values for a vector in the first quadrant should be a red flag for a student who understands what s/he is doing. D’s calculator is in radian mode.
These are just a few examples of how interesting assessing responses can be when you look for understanding and reasoning rather than right or wrong. I’d love to hear opinions from anyone who would like to discuss.
But back to the students. I’m only giving them written feedback on this assessment, no grade (we take two assessments over every topic, and this is just the first). I’m hoping that sharing these responses with the students next week and having them do a little assessment or comparison of their own work will help make the point clear: using the right algorithms, even if you do it well, isn’t enough to prove that you understand the concepts.
I know this isn't an original idea, but it seemed like a good fit for the end of Precal for juniors (my classes had both juniors and seniors, and the seniors needed time to review for their final, which was two weeks earlier than the juniors'). Here's how my version of a Desmos Art project went this year.
This semester, we studied polynomial, rational, and trigonometric functions. Use these functions (plus any other functions or relations you want) to create an original piece of art using Desmos.
This is an Awesomeness project, which means there are 5 points available. To earn them, make sure your creation:
1. Uses domain and/or range restrictions.
2. Uses inequalities for shading.
3. Uses all three function types studied this semester.
4. Uses three or more functions or relations not studied this semester and/or not studied in this class.
5. Is awesome! Convince me on the last slide by discussing any struggles you had, anything new you learned, what your creation means to you, etc.
Pretty good results: some kids got into it, and since I'm trying to stay positive about things at the end of the year, I won't get into those who didn't (you can probably tell), or those who borrowed a little too heavily from stuff you can find online (or right from the staff picks page, ffs; if you see anything down there that bothers you, please let me know). Some of these are great! The sword and the mosques, the war/peace, the mountain scene: pretty sure all of these were original, and I noticed these students putting in some effort.
I read a lot of science fiction
+ I struggle sometimes to come up with good contexts for problems
= Alien Math
My first escapade into alien math was a problem about alien units, the point of which was to understand conversions without getting hung up on feet/meters/etc. I wrote it a few years ago, and it works pretty well. I use it when the opportunity presents itself.
This year, I wanted to do something with my Geometry classes for the final modeling unit after we talk about volume, density, etc. There are lots of problems about surface area and volume and density, making comparisons, maximizing, etc. But I wanted to have a little fun with it, so I came up with
The Flerkus Miners of Gleep!
Introduced it to the students after our last assessment. There was some satisfying (for me) confusion, then people got to work!
Step 1: Make sense of the problem
Step 2: Plan and calculate
Turned out to take a couple of class periods for most kids to get a handle on this, which was a very interesting process to watch. And I did my best to JUST watch, not giving clues or hints or judgement, just observing their thinking and how they got into it.
Step 3: Build it!
Things were messy and chaotic, but it was pretty fun. Couple of days on this, running out of class time, then we were ready to start
Step 4: Product Testing
A. I made a submission form for students to check off requirements and write down measurements. Mostly, this gave some students a bit of time to frantically add last-minute touches.
B. Blamium distribution ended up taking entirely too much time, and was kind of hard to judge, so I scrapped it after these first few gave it a try.
C. The Flerkus Weigh Station was tons of fun! Rice everywhere, students fighting to be the one who got to pour the flerkus, be the scale expert, make predictions, etc.
Step 5: Awesomeness!
(Yes, I'm focusing on the positive here)
The kids came up with some great stuff for their "Awesomeness" point.
This was fun, and a great way to end the year. The whole points thing needs some work, and the logistics of the testing, too. I think it was worth the time we spent, got more out of the students than anything else would've in the last few weeks of school, and left them with a good "taste in their mouth" as they leave my class for the year.
So, when I made up my grading scale in August, I left 5 points for "awesomeness", which I didn't define very well. Here's my description from my assessment document:
Part 1: Graph My Room!
Senior "Foundations" class ended up on 3D graphing at the end of the year, and students had a really hard time visualizing and conceptualizing 3D spaces. We did tons of activities, made 3D coordinate planes, even installed semi-permanent x/y/z axes in my classroom. When we were getting ready for our last summative assessment, one of the students, being a senior, asked, "can we just, like, do a project or something instead of a test for this?" So I, after a little bit of mwahaha, said, "OF COURSE WE CAN!!!" Then I came up with this:
Final Awesomeness Project- Graph My Room!
Using isometric graphing paper, make an accurate scale drawing of my classroom. Use your drawing to find the following.
A. The distances PQ and RT (These points will be posted around the classroom).
B. The equations of all of the planes that make up the walls, floor, ceiling.
Prepare all of this in a poster.
There are 5 awesome points available.
1-2. Distances are within 50 cm of the actual distance
3. Plane equations are accurate
4. Working for the above is clear
5. The poster is awesome!
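The two calculations the project asks for can be sketched in Python with made-up numbers (the real points P, Q, R, T were posted around the classroom, and the room dimensions below are hypothetical):

```python
import math

def distance(p, q):
    """Straight-line distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Hypothetical coordinates (in meters) for two posted points:
P, Q = (0.0, 0.0, 0.0), (6.0, 8.0, 0.0)
print(distance(P, Q))  # 10.0

# For an axis-aligned room (say 6 m wide, 8 m long, 3 m high),
# each wall, the floor, and the ceiling is a plane whose
# equation fixes a single coordinate:
planes = ["x = 0", "x = 6", "y = 0", "y = 8", "z = 0", "z = 3"]
```

The payoff for the students is seeing that those one-variable equations really are planes, which is exactly the 3D conceptualizing they'd been struggling with.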
About 4-5 class periods to get this done, with varying levels of success. A few notes on the process:
I've spent this year really focusing on summative assessment. For the last few months, I've been thinking a great deal about formative assessment, and changing the way I introduce students to concepts, figure out what they know and what they need some help with, and move towards a goal.
In hindsight, this project feels to me like it could have been THE UNIT rather than something we did at the end. I have dreams of one day having a good, meaty problem to START. Create the headache, etc.
I'm really starting to think this is going to be my focus for next year, and I'm pretty excited about it.
Back in January, I was wracking my brain about how to make assessments go better in my classes; there was too much stress, too much concentration on the wrong things: it felt too much like a big bad test that everyone should be stressed out about. So I thought to myself, what if they could take a break, take a walk, talk about the test, get rid of misconceptions and jitters, etc.
And that's what we did.
I didn't really have a good idea about how to do this the first time, so I was kind of loose with everything.
"Work on the test for 10 minutes, then we'll take a walk. Come back, work for another 30, we'll do it again. Then 20 minutes to finish."
The idea here was to have some time to get into it, then ask questions that might have come up, then some more work, then final questions.
I didn't talk or answer any questions. I let students take notebooks and pencils with them, and bring them back into the room.
After each class's first test walk, I asked for some feedback about the process:
Ok, nothing really shocking there. I used the "cheating" results to have some conversations about why I write assessment questions the way I do, why I ask for so much from one question, why my assessments are only 1-2 questions.
The next few questions on the survey gave me some more interesting feedback (summarizing and cherry-picking some results here).
Did anything bother you about taking a break during the test?
Describe any suggestions you have for how we can make tests less stressful.
Teachers with more foresight than me probably know exactly what went wrong with this: some students used the breaks, especially the last one, to just copy each other's work. Of course they did!
The process with the rest of the classes in this first round of tests allowed us to have some great discussions about how to show your understanding (and how to show that it's YOUR understanding).
I pretty quickly stopped letting students take notes or any paper with them, instead sending a basket of whiteboards and markers with them that got erased before they came back into class. I also got rid of the second break: work 10 minutes, 10 minutes to discuss, 40 minutes for the rest of the test.
I don't have really good data on this. Visual modeling increased, but also paragraphs of writing where some good algebra steps would do just fine. "Less tell, more show!" became my most often used comment on assessments.
I think it "felt" better, at least to me. At least most of the time. It also gave me some more freedom (along with some more directed test-prep on my part) to ask more open-ended questions: "Make up your own triangle and solve it to show me you understand trigonometry".
Then there was
The oblique asymptote incident:
So, one of my precal sections, my "difficult" class, last period of the day, test over rational expressions and functions, Illustrative Math question about fuel efficiency...
In this class, I have one student who's way ahead of everyone, a transfer this year from another school where his algebra 2 class covered most of what I have to cover to meet the needs of the students here. He's the "go to guy" during test walks, the one the other students crowd around. He and I had had a discussion about a similar problem, and how to tie the idea of an oblique asymptote to the context and the solution. It wasn't something that most of the students were ready for, but he was. Here's some of the nonsense I got back on this test:
I threw these and a few others into a presentation (yes, along with some positive things, too) and used it to have a very pointed discussion about cheating. I also found ways to remove this student from the conversation during walks (by having my own conversation with him) so that the others wouldn't get distracted by things they're not ready for.
I settled on a 3-minute reading period (look over the test and strategize: no talking, no writing, no calculators) followed by a 10-minute walk before each test, with whiteboards if students want to use them. Some students still try to memorize the entire problem and get classmates to give them the "answer" (whatever that means), but this is where I think I can feel comfortable. This is what I did for the rest of the year.
Reflection: Why do this?
Test walks are a pain in my ass, mostly because they cause me to spend so much time thinking about and watching for academic dishonesty. I'm not sure if they really help with assessment results because I'm too lazy to do a real look into the scores, and my records aren't good enough to call this useful data yet. I still have assessments where the majority of students miss the mark completely and/or give nonsense answers to questions.
I do this, and I'm going to continue to do this, because of the mathematical discourse it produces. The "pressure" of the test walks produces the best mathematical discussions I ever get to witness, even from the students who are the most disengaged in the classroom. Students argue their case, critique the reasoning of their classmates, ask questions, and don't stop asking until they get it. On one of the last walks this year I tried to capture this in a video. It's kind of hard to hear what they're saying, but you could probably get the idea with the sound off.
So, the question I'm working on now is: How do I get this kind of engagement as a normal part of my classroom... without having to give a test every day!
Using student feedback, problem solving, and modeling to come to a better understanding of assessment
Continued from this previous post
So, I let the issue marinate a bit, had the same discussions with my other two classes the next day (block scheduling), and formulated the following situation. I took a break from the curriculum and devoted the next two days of classes (5 different sections, 90 minute classes) to investigating in the hopes that students might come to a greater understanding of how their grades are calculated, and maybe even learn how weighted averages work in the process.
As I mentioned in the previous post, I wrote this shortly after a workshop I attended (the first of five institutes for the Math Fellows in International Schools program). Whether consciously or not (again, this was way back in October), the discussions we had on mathematical modeling really informed the writing of this problem. Previously, I would have given some numbers, but here I required students to simply use my grading scale, and come up with their own interpretations of intentionally vague descriptors such as "or slightly better" or "just squeaking by".
I didn't (still don't) have a great understanding of "mathematical modeling", and found (still find) it difficult to include in the work I ask students to do. This problem feels like my first decent attempt at formulating a modeling situation.
Everyone had trouble getting started
"You want me to make up my own numbers?"
- "Yup, just make sure they make sense"
Most of the students started off with their "feelings" about the problem, which is great. I think we want students to speculate about the solutions before they get into the math of it.
However, many of them, particularly the older students, were weirdly overconfident and considered themselves "done", without doing the math to confirm.
The images above show work the students asked me to check, to see if they were right...
The model that ended up making the most sense to the most students (which I didn't get a picture of, unfortunately) was each category of grades (practice work, summative assessments) being a bowl full of available points. You earn a percentage of those points based on your average in that category.
These guys had an "aha" moment when they figured out the total is 95, not 100 (I have 5% in reserve for a category we don't use yet).
Further, if I don't grade practice work, it's out of 85...
So, the whole point of doing this was to really investigate the question:
In what case does grading practice work further my ultimate goal, which is to make grades reflect the level of understanding of my students?
Working through this problem allowed most of the students (with a little help from me) to come to the following (seemingly obvious, but important to understand) general conclusion: grading practice work is only beneficial if your practice work average is greater than your assessment average.
What does this mean?
Which leads us to the new policy: I'll keep track of practice work, because it's formative assessment. I need to be able to see if students are learning the skills they need to "play the game" of performing on assessments. However, I'll only count it if it helps your final grade.
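The policy comes down to a small calculation. Here's a sketch using the weights from my grading scale (10% practice, 65% summative, 20% cumulative exams, with 5% held in reserve); the sample category averages are made up:

```python
def grade(practice, summative, exams, count_practice=True):
    """Weighted percentage grade; each argument is a category
    average from 0-100. Weights: 10% practice work, 65% summative
    assessments, 20% cumulative exams (5% held in reserve)."""
    if count_practice:
        # 10 + 65 + 20 = 95 available points
        return (0.10 * practice + 0.65 * summative + 0.20 * exams) / 0.95
    # Without practice work, only 85 of the points are available.
    return (0.65 * summative + 0.20 * exams) / 0.85

def final_grade(practice, summative, exams):
    """New policy: practice work only counts if it helps."""
    return max(grade(practice, summative, exams, True),
               grade(practice, summative, exams, False))

# Practice average above the assessment average: counting it helps.
print(round(final_grade(95, 80, 75), 1))  # 80.5
# Below it: the grade falls back to the assessments-only average.
print(round(final_grade(50, 80, 75), 1))  # 78.8
```

A little algebra on the first formula confirms the conclusion the students reached: counting practice raises the grade exactly when the practice average exceeds the combined assessment average.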
I had students go back and check out their first quarter grades, and whether/how practice work affected them. This allowed me to address some really wild misconceptions ("It helped my grade went up by about 30%. It is very beneficial for me."), but also allowed most students to see that it has a very minimal effect, and that effect is sometimes negative.
For my own reflection, I had very mixed emotions after going through this with all 5 classes. A couple of them got it, and it felt like we came to a really powerful understanding. For others, including the class that inspired the problem, it was a big "Meh..."
If you've ever taught the same lesson 5 times in a row, you've probably had the experience of waning excitement on your own part: what seems fresh and exciting for the first two classes can start to seem repetitive and dull for the last two. I think that's some of what I was feeling, but there's also this vast difference in the dynamics of my classrooms that I've been struggling with all year.
I hate that I devoted this much time to talking about grades, when I really need to get the focus of these students away from grades and onto understanding.
I love the problem itself, and the fact that it was directly inspired by a need expressed by the students. I'd like to try to incorporate more problems like this in my classroom instruction. Problems where students have to build a model of a real-world situation, work through the mathematics of that model, and use their results to come to a real-world conclusion.
I started writing the first part of these posts directly after teaching it to my first two classes, hence the title. I remember feeling like "Wow! Look at this amazing thing that's happening!" The third class later that day just totally burst that bubble (the class I wrote the problem for, the source of the most emphatic "Meh"s), and really brought me down. I didn't finish the post; within a couple weeks I had stopped posting on my photo blog and gone twitter-dark, and the rest of the semester was really an unsatisfactory struggle.
With some time to re-charge over the winter break, and re-inspire at a recent Math Summit in Shanghai, I feel like I'm ready to get back into things now. There's an exciting curriculum switch coming up at our school, which I get to play a big role in. I've got some engaging project/assessments going in all of my classes to give everyone a good start on the second semester. We just bought a bunch of potted plants for our house, which we're giving silly names. Things are looking up...
I started this post back at the end of October on a real high, right after teaching the lesson to some really receptive students, then just got bummed out and frustrated (with the reactions of some other classes and some other things) and never finished it up. Here's part 1, and I'll do my best to finish soon.
Using student feedback, problem solving, and modeling to come to a better understanding of assessment.
I ended my last post noticing how valuable being challenged by my students is turning out to be, especially as I go through this process of changing how I do everything. This is continuing to be the case.
We just finished up our first quarter here. I had a bit of a mad rush to squeeze some assessments in, so I was grading up to the last minute, and was really disappointed with the results, pretty much across the board. Also, every slacker in the class was emailing me to take care of practice work zeros in the gradebook, so I'd go look at the assignments, and half of them still weren't done... OK, my blood pressure is going up, so I'll stop, but you get the picture.
Anyway, got the grades in, then had a little time to reflect about practices so far. There were a couple of things that were bothering me:
Starting the conversation
I started these classes by asking everyone, "what do you think would happen if I stopped grading homework?" Most immediate reactions to this question are, "well, no one would do it." We talked about whether or not that was true, and I gave them my answer: I think the students who do it now understand the purpose of practicing and would keep doing it, and nothing I've ever done has had much of an effect on the others.
Then, I tell them about the new plan and have them do a practice warm-up.
A tale of two attitudes
Now, my first two classes, which I have been known to refer to as my "easy" classes, are very compliant, and just sort of say, "ok," and go on about their business.
My third class (22 juniors and seniors, last period of the day) is my most challenging group. After my first test, they told on me to the principal (who sat through my class on how to read test scores and improve, and totally backed me up). We've had a couple of breakdowns, where I just stop the class and lecture them about how useless they're being (they have a really hard time starting problems and staying focused). They complain about my teaching methods (they want more direct instruction) regularly, and are completely grade (percentage) focused. This class freaked out about #1:
<picking up here in January>
So, it's been a while, and I don't remember exactly, but I'm sure I went home from that class frustrated and burned out, as usual. But somewhere in the night, it started coming to me (inspired, I'm sure, by a recent workshop with Erma Anderson)...
And I made a math problem out of it.
To be continued...
I was lying in bed not sleeping at 2AM or so (as one does), and had one of those ideas that I just had to use in class today (again, as one does: why is this getting more and more normal for me?). It worked so well I had to blog about it.
When I introduced the 0-4 "Levels of Understanding" to my students, some of them asked me to show some examples of the difference between the levels, and of course I didn't have any handy. I told the students we'd come up with them as we went along, and tried to forget about it. That obviously didn't work...
So this morning I threw together my best stab at an example of 1-4 for a Geometry construction by working one out in four different ways. Take a look and see what you think.
I put 'em up on 4 walls, and had the kids circulate and "grade" each. I didn't tell them that there was one of each score, and I didn't give them any more pointers than to use the posters in the room (4 claims and levels of understanding).
-kids were referring to the posters, going up and reading more closely; signs they were thinking about it
-kids were discussing what proficient meant
-I think we started to get to the difference between a drawing and a construction.
-Could've done electronically, but it was worth 4 sheets of paper to see the kids moving around the room, discussing, walking up to check the posters.
What the numbers say:
I collected their post-it votes and (why the hell not?) recorded their responses and made a chart.
I didn't have time to prepare something like this for my two sections of precal later in the day, but we got a little discussion in, and I had them start to try to formulate 1-4 responses for a question. Definitely doing this next week with them.
Man, this feels good. I had the energy today that comes with going into a 5 day weekend, and I got to translate that into something that felt really productive. The students' notions of what a math class is and how it is supposed to work are being challenged, and they are challenging me about how I assess, and ultimately grade. Conversation started! Let's keep it going.
I need to start this off with a big thank you to my badass wife Rani for her help making the badass posters you'll see below. She's a badass 5th grade teacher.
My last two posts have been about my journey towards understanding SBA and some new understandings about assessment I gained this summer. I’m about to start my third week of school, and I think I have something ready to present to students (and also parents; back to school night is this Tuesday). I’ve broken it up into three sections.
What I Assess
Based on the four claims, here’s what I’ve put together in an attempt to make this clear to a population who has had no experience with standards thus far. I debated what exactly to present here, and decided just to keep it simple: these are the four things that matter - really matter - in understanding mathematics. I chose to leave out the Practice Standards and just include some of their wording in the descriptions. I might make a different decision if these students had any experience with standards, but I think this sums things up without overloading.
How I Assess
I have some experience with a 1-4 scale, so I’m sticking with that. I think I can explain it clearly and assess fairly using this model. The big idea is that I am assessing your level of understanding, not your ability to do a certain percentage of math problems correctly. I hope I can make this clear.
How this all translates to grades
I have to give a percentage grade, there’s no way around it at my school. This is sort of the hardest part for me. My last school was an IB school, so I used historical data from IB exams to set up my system there. It’s a much more forgiving system, in terms of percentages, than the classic American system where 60% is the minimum passing grade. I’ve done a lot of blog reading about this, and I think I’ve got something I can work with:
Grading Scale (what I put in the grading system)
10%: Practice work (includes homework, classwork, etc., and is pretty exclusively completion grades)
65%: Summative assessments
20%: Cumulative exams
5%: Awesomeness
Awesomeness, you ask? Yeah, this is something I threw in to try to keep kids on their toes.
So, by the time parents come in for back to school night on Tuesday and hear about this, I’ll have presented it to all of my students as well. I expect some pushback, but I’ve thought about this for a long time now, and I’m feeling confident and ready to support my position clearly. Wish me luck! And, as always, let me know what you think.
This past summer, I was invited to participate in a Math Assessment Workshop for AERO (American Education Reaches Out), sponsored by the U.S. State Department. For anyone who doesn't know what AERO is (I didn't really), it's basically Common Core for international schools; the goal is to create "a framework for curriculum consistency across grades K-12 and for stability of curriculum in overseas schools, which typically have a high rate of teacher turnover." The math standards are essentially the same as Common Core (in fact, I think they're the precursor to CC, but I'm not solid on my history there). The workshop was led by Erma Anderson (@ermaander), an impressive individual with a wealth of knowledge who I'm glad to have met and been able to work with.
We were a small group of 8 teachers from schools around the world, and from all different age groups (2 K-5, 3 MS, 3 HS). The rest of the group had participated in workshops before for the MSIS (Math Specialist in International Schools) program run by AERO; I was sort of an outsider who slipped in because my wife is doing MSIS, but now I want more!
Now, I thought I was pretty up-to-date on SBA in the math community (see the previous post for my history), but this workshop turned me on to a new framework I'd never used, read about, or seen before: the four "Claims for the Mathematics Summative Assessment" from the Smarter Balanced Assessment Consortium. The purposes of the workshop were to
After getting our feet wet with this process, we turned to creating a student profile based on the four claims. We started out calling this a "rubric", which led to a lot of confusion about the purpose of the document. Once we changed our focus to creating a profile, we started coming together towards a final product. This process took two days, but we felt pretty good about having created a document that we all felt comfortable applying K-12.
After this, we got back to writing and critiquing assessments. The high school group borrowed heavily from Illustrative Mathematics problems, and the following are three problems we felt pretty good about.
If anyone reading this is interested in checking out any of the K-8 problems, contact me.
Overall, it was energizing to be part of this group of math teachers who were focused and interested in what they were doing. I hope we can keep in touch through #AEROmaththinktank on Twitter.
Because of my unfamiliarity with the claims, I needed to do some independent study and research to help get my head around this way of approaching assessment. Here are two big ideas I'm going to try to use in assessment this year (but this will take some time, and I've got a lot of newness to deal with this year).
1. The four claims are the boss:
These are the things we are always assessing in math assessments, regardless of the specific learning target or subject matter. They should be considered whenever we are designing assessments. Part of Erma's instruction included a link between the four claims and the standards for mathematical practice (SMP); I'm much more familiar and comfortable with the SMP, so I found this helpful. The mess below is me visualizing this (and playing with MS OneNote on a new touchscreen computer).
2. Clusters are learning targets
I spent my first five years trying to write learning targets based on specific standards. Anyone who's done this knows how muddled it can get. Erma blew my mind with the idea of using the Common Core clusters as larger learning targets for assessment's sake. This is fitting in with the idea of a SBA "skills list", which I haven't used before, and the following is the beginnings of my attempt to do this for my geometry class this year. I'm using New Visions' curriculum as a jumping off point here.
Ok, I'm stopping there because the first day of work at my new school is tomorrow! Just had to get some of this down and out of my head before getting into work.