Monday, February 17, 2014

Test Results in a BYOD Class

Another happy outcome of changing my math class over to BYOD last semester was that the amount of student tracking I was doing - and reflection on the results - dramatically increased. I found myself much more on top of who-had-handed-in-what, more on top of my marking, and more analytical about my overall test and quiz results.

These are things all good teachers should be doing (as was drilled into my head at teachers' college), but they often fall by the wayside in the busyness of just keeping up with day-to-day teaching tasks. In fact, as much as I benefited from the extra tracking in my math class, I couldn't get around to the same level of tracking in my other classes. But I felt I had to in the BYOD course.

The reason for this is simple: I had a lot more riding on the success of this BYOD class. If I was going to go to the trouble of changing the entire course, and subjecting my students to learning in a way completely unlike anything they had experienced until then, I had better be able to show that it was worth it.

One could argue that improved test scores indicate success - the better the understanding of the material, the higher students would score on tests. Fair enough. I know tests don't tell the whole story, but test results are easy enough to obtain, so let's look at that.

Here is a graph of how my students did on our four unit tests across the semester. Each line is one student; the tests go in chronological order and do not necessarily increase in difficulty, as each test covered different topics. To avoid identifying any one student, anyone who joined the course after the first test or dropped it before the fourth was omitted from the data.
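(For anyone curious how a chart like this could be put together, here is a minimal sketch in Python with matplotlib. The student IDs and marks below are made up purely for illustration - this isn't my actual data or workflow - and the filtering step simply mirrors the omission rule described above.)

```python
import matplotlib.pyplot as plt

# Hypothetical, made-up marks: anonymous student ID -> mark on each of the four unit tests.
# None means the student missed that test (joined late or dropped the course).
marks = {
    "S01": [72, 68, 55, 74],
    "S02": [45, 48, 38, 46],
    "S03": [88, 85, 70, 90],
    "S04": [None, 60, 52, 63],    # joined after test 1 -> excluded
    "S05": [67, 63, None, None],  # dropped before test 4 -> excluded
}

tests = [1, 2, 3, 4]

for scores in marks.values():
    # Only plot students who have a mark for every test, per the rule described above.
    if any(score is None for score in scores):
        continue
    plt.plot(tests, scores, marker="o", alpha=0.7)

plt.xticks(tests, [f"Test {t}" for t in tests])
plt.ylabel("Mark (%)")
plt.title("Unit test results, one line per student")
plt.show()
```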

What a mess!

What I would have LOVED to see is a general upward trend from test 1 to test 4, indicating more comfort with independent learning, and that students had, over time, found their rhythm in the course. An increase in confidence should have led to better performance on subsequent tests. Instead, I'm getting this scattered mess (and what the heck happened on test 3??).

What else do these test results tell us? Nothing surprising, and nothing unlike what you'd see in any other math course:
  • Many students found the content to be challenging the whole way through the course. 
  • Those who tended to do well on tests at the beginning of the course continued to do well on tests throughout.
  • Those who tended to get below 50% at the beginning continued to get low results on tests throughout.

Initially, I was devastated.
As a whole, the class did not improve in their ability to succeed on tests. Was my BYOD experiment a failure? Did I do these students a huge disservice by switching to independent and proficiency-based learning? What did I do wrong?

But then I got thinking: this doesn't mean that my students didn't get better at math over the semester (they did), or that they didn't improve their inquiry skills (many of them did), or that they were less willing to take risks (indeed, I found the opposite). It really just speaks to my students' test-taking skills, which did not improve.

BYOD is not meant to make students better test-takers.
It is meant to make students better collaborators, better problem-solvers, and better learners. My students became more comfortable with investigative tasks and communicating their discoveries. They became more resilient, figured out how they best learn, and how to best demonstrate what they learned. A summative test is not always the best demonstration (and certainly not the one my students would choose, if given the choice).

If I want my students to do better on tests, I need to teach them how to do better on tests. If I want my students to be better life-long learners and leaders in their fields, I need to teach them those skills. Test preparation is but a part of that.

Test-taking is important - many skills are still evaluated this way as students make their way into college and university - but it is definitely not the whole picture. As I prepare my new BYOD math class for our first unit test later this week, I'll be keeping this in mind.

2 comments:

  1. I wonder if the type of test question contributes to the results? Would students do better if they collaborated or contributed to the test creation process? I think they would better understand the thinking behind test item creation. They would also be better equipped to collaborate in meaningful test preparation. I agree that students need to learn how to take tests - having them be part of the process might serve to demystify tests and make them less intimidating.

    Replies
    1. That's a good point re: type of test question. I kept all test questions knowledge-based only, directly reflecting the learning goals, which matched their learning resources, exit slips, quizzes and review questions to a tee. I tried to eliminate the guesswork when it came to studying, and made the test as predictable as possible, so it would just test basic course content. But there were students who still struggled (either due to not performing well in a test situation, not studying, or not getting to all the learning goals in the required time).

      I wonder how I could involve students in the test creation process, when everyone is in a potentially different place in the unit coming into the test, and there is no creativity involved in creating the test in the first place? Would it suffice to show them the types of questions (from the review), and then have them create their own? I'll play around with this tomorrow in class (the grade 10s will be reviewing for their unit test on Wednesday). Thanks for the comment!

