Further musings on assessing essay plans: thinking about feedback

There can be few terms that polarise discussion in the way the word ‘feedback’ can. In the academic context of a Russell Group institution, I encounter feedback in a variety of guises on a daily basis, either receiving it or, perhaps more often, giving it. Feedback is never quite the same twice: it can vary from the very personal and specific to the really rather generic. It can be given verbally or written down, typed or handwritten, on a piece of paper or, increasingly, sent and received electronically, for example through Moodle and Turnitin. Another factor that affects feedback is timing: is feedback available at the formative stages of a task, or only on its completion? Or does it become reflective, thinking back over a set of issues and then feeding forward, trying to identify key recommendations that may allow somebody to perform better in a future assignment? Is feedback built into the assessment stages and offered at set times only, or is it available as and when students ask for it? And that in turn raises the question of how many times feedback should be offered. Questions, questions and more questions, all of them frequently asked and much debated, and, like most academics, I have spent much of my time thinking about feedback and trying to work out how best to give it to my students. Given my teaching philosophy of learning by doing, I experiment with modes of feedback, and this blog post follows up on an experiment with assessing essay plans that I have previously blogged about.

Just a quick recap if you have not seen the earlier post: last semester, I introduced an element of formative assessment to two modules I was running in the Autumn Semester of 2013/14. For these two modules, the assessment was split into a 50% exam, a 40% essay and a 10% essay plan. The two modules were of comparable size (one with 25 students, the other with 24), but at different levels (one at Level 3, one at Level 2). Marks at Level 2 are weighted 40% towards the final degree classification, whereas marks at Level 3 are weighted at 60%; for both sets of students, the marks achieved for these modules contribute to their final degree classification, but the greater weighting at Level 3 tends to put marks performance at this level into very sharp focus for the students concerned. Anecdotal evidence suggests that students at this level choose particular modules based on their form of assessment, and often also based on the reputation of the marker, so the slightly different assessment pattern for my Autumn Semester modules introduced an element of the new for both sets of students. Both modules recruited fully, so the assessment may have attracted rather than deterred students. Again, both modules were electives, that is, non-compulsory modules freely chosen by students from a range of options available to them. For both modules, the overwhelming majority of participants were Single or Joint Honours Art History students from my department (one module included a single exchange student, the other two). So, the modules were broadly comparable, with the biggest variant being the year level.
In order to make the assessment of essay plans possible, students needed to follow the same format of plan, and for both modules, I worked with the Mumford Method essay plan model, which asks in particular that a plan is thought through completely, and that equal emphasis is placed on all of the components of the essay. For the students, following the Mumford Method was set as a formatting requirement; I did not offer feedback on other types of essay plans (spider diagrams, mind maps, etc.), as in order to mark the plans against a shared set of marking criteria, the submissions have to be comparable.

The first module, Renaissance Luxuries, was taught over an 11-week term, with 3 hours of contact time each week, consisting of one 1-hour lecture and one 2-hour seminar. Renaissance Luxuries is a final-year module (Level 3), and was selected by 25 students, many of whom I had worked with before, and some of whom had previously used the Mumford Method for essay planning. Within the first 4 weeks of teaching the module, I set seminar time apart to prepare for the essay plan assignment. During class time, students were introduced to the Mumford Method essay plan model, with discussion focusing on how to use the plan for structuring a case study-based art history essay. In other words, the students were introduced to a method, the method was discussed, and we considered its implementation, so they were offered help in interpreting the instructions in a seminar devoted to academic writing. I then offered two deadlines for submitting the completed essay plan through the VLE (Moodle) supporting the module, and for each of these deadlines, undertook to return the annotated and marked plans to the students within a week, leaving them plenty of time before the essay submission deadline to write the essay itself. The link below shows the correlation between the marks for the essay plan (blue) and the marks for the essay itself (shown in green):

Year 3 Data set

I had expected a much more uniform picture, with the blue essay plan consistently marked lower than the (green) essay itself, but this test group of third years produced a much more varied response, with the largest percentage of students (41.7%) actually performing less well in the essay itself than they had done in the essay plan exercise; in one case, the difference was a very extreme 26%. The second largest group of students (37.5%) performed as I had expected, that is, their marks improved between the essay plan submission and the essay itself; in most cases, the improvement was between 3 and 5%, but there were a couple of differences of 10% or more. The final group (20.8%) was the small group of students with identical marks for the essay plan and essay. So, 58.3% of students performed equally well or better after the submission of an essay plan, which was marked and returned with (arguably both formative and summative) feedback.

The picture for the Level 2 group is similar. Here, the module concerned, Renaissance Venice, was again taught over an 11-week term, through lectures and seminars, and assessed through both coursework and exam. As before, the essay plan submission was prepared through a seminar discussion of the method, and as for the Level 3 group, two essay plan deadlines were offered. For the Year 2 student group, the biggest group, at 44%, saw an increase in marks between the submission of the essay plan and the mark recorded for the essay itself; for this group, an improvement of 5% between the planning stage and the essay was average, with the biggest increase working out as 12%. The second largest group (36%) saw marks decline, again by an average of ca. 3-5%, but one student's mark dropped 14%. The remaining 20% of students recorded identical marks for the two writing stages. So 64% of students appear to have benefited from the exercise, with marks between the two stages either improving or staying the same.
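The group splits reported above boil down to classifying each student's pair of marks (plan, essay) as improved, declined or unchanged. As a minimal sketch, using entirely hypothetical marks rather than the real cohort data, the tallying might look like this:

```python
# Each tuple is one student's (essay plan mark, essay mark).
# These marks are invented for illustration only.
pairs = [(62, 67), (58, 55), (70, 70), (65, 77), (55, 41)]

# Classify each student by the direction of change between the two stages.
improved = sum(1 for plan, essay in pairs if essay > plan)
declined = sum(1 for plan, essay in pairs if essay < plan)
unchanged = sum(1 for plan, essay in pairs if essay == plan)

total = len(pairs)
print(f"improved:  {improved / total:.1%}")
print(f"declined:  {declined / total:.1%}")
print(f"unchanged: {unchanged / total:.1%}")
print(f"same or better: {(improved + unchanged) / total:.1%}")
```

The "same or better" figure at the end corresponds to the headline percentages quoted for each cohort.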

These are of course only the bald statistical findings, and the figures need interpreting. If one looks at the students with the very wild mark variations, for example, each and every case has a story behind it that explains the variation, and in most cases, it is BECAUSE of the wide divergence between the two sets of recorded marks that those stories have been identified and, in all cases, followed up. Here, the availability of an early set of marks has allowed issues to be picked up at a comparatively early stage of term, hopefully with a longer-term positive impact on the students' engagement with their work and their enjoyment of the course.

Another issue that needs to be teased out from the bald statistics is whether, within the year groups, there was a noticeable difference in results between those who opted for the early submission of the essay plan and those who chose to work towards the later deadline. It is here that an interesting story emerges, because here one can observe much clearer patterns of marks difference. For the Level 3, Renaissance Luxuries group, only 5 students submitted their plan for the early deadline; of those 5, all returned improved marks between the two stages of assessment. The same observation applies to the Level 2 group, so there seems to be a correlation between submitting a plan early enough to have time to revise it properly, and doing well. It's common sense, of course, but nice to have statistical data to back this up for once!

Most interesting, though, is a closer look at the plans that were so much better than the finished essays, in order to understand the factors that came into play there. In all of these cases, the plans were poorly converted or, in some cases, entirely disregarded, with the essay that was submitted differing substantially from the essay plan. Poor conversion often centred on the fact that a point made in the essay plan, one that suggested an engagement with a set of issues and arguments, was faithfully carried across but not developed, and so simply fell flat. There was a skeleton of a structure, a sound framework on which to hang material, but the skeleton remained bare and the framework needed fleshing out. Following the conclusion of the marking process, and once the students' work had been de-anonymised, I have been able to follow this up with them, and what this experiment has certainly enabled me to do is give very directed feedback on working methods, with a view not just to providing feedback, but to feeding forward.

There is much more that can be teased out from these initial findings, such as whether students get better at using the Mumford Method with more practice, or whether the value of this exercise lies less in the actual feedback as a formative tool and more in its role as a mechanism for assurance that an essay is ‘on the right track’. But for more conclusions to be drawn, I need both to analyse my findings further and to generate more data, increasing the material I have available for analysis. In particular, I would like to follow a cohort of students through a number of modules, and look at how, and indeed whether, they continue to use essay plans. My suspicion is that essay plans produce better results at more advanced stages of study, especially as pieces of writing get longer and more complex.

Much to think about, and as regards musings on assessments, there will be many more to come yet…

