I had an interesting conversation with a doctoral student who is doing her work on how teachers are evaluating and assessing student digital writing projects. I am part of her study (well, as a teacher, I am participating) and yesterday, we had a 90-minute chat about assessing digital work. Interestingly, this connected in my head to the inquiry going on at Rhizomatic Learning this week, too.
During our time, she asked me a lot of questions and I admit, I struggled to explain how I best assess the digital writing projects that my sixth graders make. Even after co-editing a book on assessment of digital writing (Teaching The New Writing), and even after years of bringing various projects into my classroom to push at the notions of what it means to write in a digital age … I am often as lost as I have ever been.
In particular, yesterday, I did a very close look at a video game design project and “talked through” what I was seeing, using my own project indicator sheet as my guide.
It had been some time since I had played this game called Into An Animal Cell (you can play it, too), and I remained impressed by the work of this student. Talking the game through with my assessment lens on as I played it was another way to examine the moves of the student around game design, story narrative and the use of science as the underpinning theme.
But I openly admitted to her: grading/assessing something that has many modalities — here, for example, game design, science concept, story narrative, science vocabulary, etc. — is something I continue to grapple with on so many levels. If I get too specific, then I lose the flavor of the whole. Too general, as I am in the assessment tool for this project, and it is nearly meaningless. And then there is the element of “newness” here — I’m lucky if one or two of my 80-odd students have ever designed and published a video game. Assessing the newness of the skill pulls against the learning experience I want them to have in the end, which is a design mentality and an expanded notion of story narrative flow in a multimodal space.
I still seek (and have not yet found) the balance here to create an assessment that will do what an assessment tool is designed to do: guide students to make improvements so that they can further their work and learn from the experience. I still feel as if I am designing assessment tools to give them a grade. The tool is for me more than for them, a way for me to justify why we are making video games.
I need to turn that whole perspective on its head. I need to better figure out how to create something more meaningful for my students. I’m still struggling with this. As it turns out, so are many others, as evidenced by some of this researcher’s other interviewees.
Peace (in the pondering),