Making the most of Personal Learning Checklists

(Featured picture: ‘untitled’ by AJC1 is licensed under CC BY 2.0)

A ‘Sharing best practice’ post by Kate Rolfe (Humanities)

Whether you call them Personal Learning Checklists (PLCs), RAG lists or, as we refer to them, Module Outline/Review Sheets, you have a tool which, used effectively, can support learning in a multitude of ways.


Picture 1: A Module Outline Sheet

In Humanities, where pupils study two subjects (Geography and History) with the same teacher, we use our ‘module outline sheets’ and ‘module review sheets’ to signal the beginning and end of topics. The module outline sheet is used with pupils to discuss the structure of the term and key assessment points. It also allows pupils to engage with the success criteria and objectives for the term in order to select a target to aim for, based on past progress and predicted targets. Finally, the RAG (Red, Amber, Green) aspect of the sheet allows pupils to judge their current understanding of a topic and accept that red sections provide opportunities for new learning. It is also helpful for the teacher, as it can highlight areas of overlap between subjects, where pupils may have already covered some of the content, so teaching of these topics can be modified accordingly.


Picture 2: A Module Review Sheet

The module review sheet allows pupils to reflect on their progress on a termly basis, over a longer period of time than specific assessments. By completing the RAG section a second time, pupils can easily compare their perceived progress over time. It is also useful for the teacher: if there are any common “red” areas, these can be addressed through revision or other means. The right-hand side of the page gives the pupil a chance to reflect on particularly strong areas of a topic and areas they could improve on; this could relate to specific skills or to their general attitude to learning. This has become more explicit in lessons through our school ‘Excellence Programme’, where pupils are asked to find a piece of work that they are particularly proud of in order to reflect on how they achieved excellence in learning. The teacher WWW (What Went Well) and EBI (Even Better If) section allows the teacher to give more generalised feedback to a pupil about their attitude to learning, response to feedback, homework and so on.


Picture 3: A GCSE Geography Module Outline Sheet

At GCSE level, module outline sheets use the terminology of the exam specification, since this is where a large number of questions originate. For example, during a mock exam, a question referred to “how vegetation is adapted to the soil and is in harmony with it”. The term “harmony” was used in the exam specification but had not been used explicitly in the textbook or lessons; as a result, although the pupils had the knowledge required to answer the question, the wording threw them. The module outline sheets can also be used to track topics and completed work. Now that the Geography GCSE exam has much more content, each topic can take up to two terms to complete. By dating work, pupils can track any lessons they have missed in order to catch up on that work.



Engaging Disaffected Learners (3)

An Action Research project by Megan Dunsby

Project overview

Working with two other members of the department (Anna Watkins and Hannah Gale), we established the following aims:

  • To develop our pedagogical understanding of the reasons behind disaffection;
  • To establish and broaden a range of strategies to re-engage disaffected learners;
  • To build resilience and inspire self-confidence.

These aims give an overview of our foci, and from here we endeavoured to use different strategies to engage disaffected learners. We decided to concentrate on Year 10 students, and boys in particular. Having a shared year group meant that we were able to support one another and develop strategies together.

There are myriad reasons why students become disaffected, but from our experience in our Year 10 classrooms we felt that these were the central ones:

  • Disaffection hides a literacy weakness;
  • Pressure of year 10 GCSE (especially with current changes);
  • English is compulsory and relevance isn’t obvious to all.

Initial Research

The National Literacy Trust had the following to say on boys’ literacy levels:

  • “Research consistently shows a gender gap in children’s reading. Boys’ attitudes towards reading and writing, the amount of time they spend reading and their achievement in literacy are all poorer than those of girls.”
  • “Unfortunately it is those boys who are least likely to be socially mobile who are often most vulnerable to these triggers. For example, white working-class boys are one of the groups with lowest achievement in literacy”.
  • “By GCSE, for achievement at grades A* to C in English, the gap [between boys and girls] is 14 percentage points” (National Literacy Trust).

So, why is this gap so big and what can cause boys to become disaffected learners in English in particular? Firstly, the changes to the examination system at GCSE mean that students must sit examinations at the end of Year 11 in which they must recall and apply two years’ worth of learning. This is an overwhelming and stressful prospect for many students, who are immediately disengaged by their own assumption that they will fail at this challenge. This can be a huge cause of disaffection at the beginning of Year 10.

The English curriculum has also become more traditional, favouring 19th-century literature and classic British literature, which means that students are working with challenging texts and unfamiliar language. Some boys in particular find it difficult to understand the purpose of studying these texts, which can provoke disaffection, particularly given that English is a compulsory subject that students have not opted into. Indeed, Caroline Bentley-Davies suggests that a teacher must “signal exactly why you are doing something” (2010, p.165) when seeking to improve standards among boys.

In addition, GCSE assessment has become more rigorous; to achieve a Grade 5, students are expected to have a command of subject terminology and an ability to use a range of punctuation and sentence structures with accuracy and for specific effect. Those with weak literacy skills can therefore become disaffected to mask their difficulties.

My personal project overview

After a number of discussions with Anna and Hannah, I decided that I would look at ways to re-engage students through tasks influenced by a project-based pedagogy.

The literature surrounding project-based learning regularly demonstrates its effectiveness at embedding skills and knowledge in a way that all students engage with on a meaningful level. Polman sings its praises, stating that ‘the most significant contributions of PBL have been in schools languishing in poverty-stricken areas; when students take responsibility, or ownership, for their learning, their self-esteem soars. It also helps to create better work habits and attitudes toward learning. In standardized tests, languishing schools have been able to raise their testing grades a full level by implementing Project Based Learning (PBL)’ (2000).

The initial research above indicates that boys who are ‘less socially mobile’ (The National Literacy Trust) are likely to be amongst those with the lowest literacy levels compared to their more socially mobile peers. Patton’s research seems to indicate that it is this demographic of students who are most likely to benefit from the autonomy and ownership of PBL experiences.

However, the benefits of PBL are certainly not limited to these students; autonomy is a powerful motivator for all learners. According to Rowe et al, ‘in order to feel any intrinsic motivation whatsoever, students must feel a sense of autonomy, like they are in control of an element of their learning’. On boys’ literacy they comment that ‘in the early years of secondary schooling boys constitute 75-85% of students identified at risk of poor achievement progress in literacy. Of some concern is the flattening out of boys’ literacy achievements from year 4 to year 9’ (Rowe et al).

When reading this research I began to investigate whether this ‘flattening out’ was a feature of my most disengaged Year 10 student, Richard. Looking at his Spotlight assessments from Years 7 to 10, he was a perfect example of the pattern that Rowe et al discuss. After teaching Richard for six months I could see that his disengagement came from his belief that he could not achieve in English, together with his feeling that the subject was completely irrelevant to him. I began to focus on how I could create a project that would make learning the skills he required to pass English obvious and attainable.

Spotlight entry | 7.1 | 7.3 | 7.6 | 8.1 | 8.3 | 8.6 | 9.1 | 9.3 | 9.6
APP level       | 4b  | 4b  | 5c  | 4a  | 5b  | 5b  | 5b  | 5b  | 5a

(Richard’s Spotlight assessments from Years 7-9)

I began looking at what constituted a project, and Thomas, in PBL: A Handbook (2000), provided a very helpful five-point checklist for educators designing projects. He instructs that projects must be:

  • Central, not peripheral, to the curriculum;
  • Focused on a central concept and principle of a discipline;
  • Built around constructive investigation;
  • Usually, but not always, cross-curricular;
  • Realistic, not ‘school-like’.

Thomas’ pointers focussed my creation of a project, but also made me realise that projects are a time-consuming endeavour. Further research acknowledges this as one of the main pitfalls of such learning. Wethers et al have found that ‘subject orientated secondary teachers have been less inclined to embrace cross disciplinary curriculum, in the form of projects, over a more traditional approach, despite it being proven successful in reengaging previously disengaged secondary students’ (2012). Hope goes on to explain that even though teachers are ‘frustrated by national standardised tests that are a primary reason for disengaging boys from their learning’ (2010), PBL takes time and commitment that the majority of secondary schools simply don’t have. Wethers surmises that a lack of resources (time and financial) is a ‘fundamental reason that PBL is not a regular feature of the secondary school classroom’ (2012).

Despite this, these articles unanimously report greater success from students in all walks of life when given the opportunity to learn in a project-based environment. I became interested in investigating whether the disengaged students in my Year 10 class, particularly Richard, could benefit from a version of PBL that I was able to facilitate despite limited time and financial resources.

My project in the classroom

Hi Tech High, California became my next area of investigation. This American high school runs an entirely project-based curriculum, and in 2014, 94% of its students went on to college and university. I decided to replicate a project that they call the visual essay with my Year 10 English students.

By considering a knowledge, process, product model for differentiation, I examined my current pedagogy for teaching essay-writing skills to boys with low literacy.


By investigating what I actually meant by ‘learn how to write an essay’, I automatically referred back to the exam board’s assessment objectives. In Hi Tech High’s case, they take the raw knowledge and work out a way of presenting it in an informative and engaging way that is open to the public. I decided that my raw knowledge would be my assessment objectives.


Richard decided that he would focus on the subject terminology knowledge, and created a glossary to put on the essay.


Strategies and an evaluation of their efficacy

What worked about the task:

Richard was engaged in the task, and through assessment it became clear that he now knew a number of subject terms that he had not known before. Richard also felt a sense of achievement at having completed his section of the task, and became aware of a crucial element of the success criteria. He stated: ‘It was a good task because I just got on with it. I didn’t have to write loads’ and ‘I got to choose what I wanted to do and just focussed on one bit’.

What needed improvement:

On reflection, I decided that the project was too ‘school-like’ and did not really meet the real-world criterion set out by Thomas. Ways to overcome this might include an open evening where parents come to see a display of visual essays, or a competition. Once Richard had decided how to incorporate his subject terminology, he did not constructively solve much of a problem; this was another of Thomas’ project criteria, and time limitations prevented it from becoming a reality.


The most valuable element of this research for me was to fully recognise the power and potential of allowing students to be creative, curious, problem-solving and autonomous. Whilst there are time restrictions placed on us as teachers, I will endeavour to create as many opportunities as possible for students to practise being these things. I fully agree with Sir Ken Robinson, who advocates that ‘designing your curriculum around project-based learning is a dynamic way of engaging learners and of cultivating their powers of imagination, creativity and enquiry’ (Robinson, 2011).


Bentley-Davies, C. (2010) How to be an Amazing Teacher. Carmarthen: Crown House Publishing.

Hope, S. (2006) ‘The Constructive Classroom’, Journal of Problem Based Learning in Higher Education, 6(34), pp. 76-90.

National Literacy Trust (2012) Boys’ Reading Commission. All-Party Parliamentary Literacy Group Commission.

Patton, M. (2012) Subject to Change: New Thinking on the Curriculum. ATL, The Education Union.

Polman, J.L. (2000) ‘Project Based Learning in the Secondary School Classroom: a constructive approach’, Cambridge Journal of Education, 46(4), pp. 12-26.

Thomas, B. (2012) Work That Matters: A Teacher’s Guide to Project-Based Learning. London: Paul Hamlyn Foundation.

Life without levels: The Development of assessment and reporting in the curriculum


An Action Research project by Ed Walker

Synopsis: In the context of the DFE’s recent decision to remove the requirement to use KS3 levels and, alongside this, to introduce new grading at KS4, this action research paper outlines the necessary changes and the approach that will be taken to grading, assessment, reporting and the curriculum at St. Bernadette School.


With the introduction of new GCSEs, the removal of the requirement for levels at KS3, and the emphasis on a knowledge-based curriculum, there is a need to adjust our curriculum, assessment, reporting and levelling system throughout the school. Over the past three years, the first decision has been when the new levelling system should be put into place. The decision to delay implementation until the 2016/17 academic year was taken to ensure that the creation of new assessments and grading throughout the school was informed by a closer understanding of precisely the form that levels would take in the majority of subjects, based on new GCSE criteria. This delay has allowed a coherent package to be developed that includes assessment, reporting and the new grading system.

Why move away from the current system?

The final report of the Commission on Assessment Without Levels (DFE, 2015), led by John McIntosh, set out a very clear rationale for the abolition of key stage levels in schools. The DFE reported its concern that, because national curriculum levels were so complex, the application of best-fit criteria often led to serious gaps in students’ knowledge. The report also highlighted that this often meant it was not clear to parents and teachers where those gaps were. The DFE was also concerned by the lack of depth in students’ learning that emerged when national curriculum levels were used as mere thresholds for students to achieve, without securing students’ understanding.

The DFE report was clear that the Government has ‘not sought to prescribe any specific model of assessment.’ In summary, the report highlighted that a move from the current system is necessary, yet there is no single solution that it favours. Instead of issuing a clear recommendation, the report asks schools to consider three points: ‘why pupils are being assessed, what the assessment is intended to achieve and how the assessment information will be used.’ The development of our own assessment system had begun prior to the publication of this report, as was the case in many schools. However, the timing of the report was extremely useful in driving the detailed guidance that has been developed in the 2016/17 academic year.

The decision to use any grading system

An argument can be made, on analysis of the 2015 DFE paper, that the use of any type of grading system may not be the correct decision. Indeed, some schools have moved to a system without any school-wide levels or grades whatsoever, preferring instead an ‘above, at or below expected progress’ model. Such a system was considered. The main reasons for deciding not to adopt it in our school’s context are as follows:

  1. We have a successful raising achievement programme; without grades of any sort, its intervention mechanisms would become unnecessarily complicated.
  2. A structure of graded steps throughout the school gives students, parents and teachers clarity about the standard broadly expected at various points in the year. Although we cannot definitively determine the stage students should have reached, we can at least give them structure and targets; this is shown to be important later in this paper.
  3. Systems based on above, at or below expected progress end up using the same descriptors, simply without the grade attached; without the link to grades this can become too abstract for children (Pollock, 1994).

Finally, there is considerable confusion in the educational world regarding the setting of grades at GCSE (NAHT, 2016). We are fully aware that we will not be able to provide definitive grade definitions; however, by preparing our grades so that we are teaching to the top and giving students clear targets, we will at least remove ambiguity from our system. Some educational blogs have misunderstood the new GCSE grading system, implying rather than stating that it is a norm-referenced system. However, the September 2014 board paper for new GCSEs published by Ofqual [1] (Ofqual, 2014) illustrates that it is neither a purely norm-referenced system nor a purely criterion-based system, but a combination of the two. As norm-based systems do not reliably lead to grade descriptors, we have decided to use a criterion-based system that takes note of exam board comments and the grade 3, 5 and 7 descriptors provided, but does not rely solely on these; it is flexible and will need to be adjusted periodically. Ultimately, the 1-9 system that we have produced gives an indication of the final GCSE grade we believe students will achieve, but is driven not purely by this but by our own standards of excellence.

Deciding upon a new grading and assessment system

The approach that the school has taken is one of criterion-referenced testing: the process of evaluating and grading the learning of students against a set of pre-specified criteria (Brown, 2003). This was felt to be the most natural approach for the school in light of the national 1-9 grading system at KS4. The reasons for this decision are set out in this document by exploring some of the alternative systems used, as well as the rationale and supporting evidence for the system we have decided upon. It was clear that we should not aim for a simple recreation of Key Stage 3 levels; the DFE paper notes that they ‘have been concerned by evidence that some schools are trying to recreate Key Stage 3 levels based on the new national curriculum’ (DFE, 2015).

There have been a variety of approaches taken by schools to the new levelling system. The first that we considered was that taken by the ‘school A’ chain of schools, which sets the criteria reference in each year group for every subject; this is one version of what is commonly referred to as an age-independent model (Green, 2002). In their system, a student who achieves a grade 5 in Year 7 will be predicted to achieve a grade 5 by the end of Year 11. The grade 5 standard therefore changes in every year group, getting progressively more challenging as students move through the school. There are advantages to such a system: there is a clear path of progression throughout the school, and parents and students, assuming successful explanation, will be able to clearly understand the grade that students are predicted to achieve at the end of their time in the school. However, there were several reasons why we decided that the age-independent model would not be the one we use at St. Bernadette School.

Firstly, the work of Dweck (1986) has illustrated that achieving the same grade at ages 11, 14 and 16 can have a negative effect on self-esteem and motivation, because students can come to feel that their ability is fixed over time.

The second reason is that students need to be able to feel they have progressed over a period of time. Seeing their grade improve each year as they work towards an end grade can lead to increased levels of motivation (William, 2001).

Thirdly, there is a real danger that in different year groups teachers end up not effectively referencing against the soon-to-be-established 1-9 criteria, but instead developing what becomes effectively a separate system for each year group, in practice norm-referencing against the other students in that particular year. As a school whose intake changes relatively substantially in prior ability each year,[2] the quality and accuracy of the grading would be reduced. Pollitt (1994) states: “We are in danger of implementing a system of tests that behave like thermometers, all pretending to measure on the Celsius scale, but which actually each have their own freezing point and each their own idea of what constitutes a nice summer’s day.”

Finally, as an age-independent model often gives extremely detailed criteria to define assessment levels and the progress that has taken place, it can be extremely time-consuming to both design and implement. Such a system has been accused of being too mechanistic and of over-complicating the grading process (Hall, 2002). As in the DFE’s Life Without Levels paper (2015), over-complication, time consumption and excessive specificity within the grades are seen as not creating the best environment for successful assessment, and therefore for teaching and learning.

Locally, School ‘B’ have also adopted an approach based upon the age-independent model, yet different to that of ‘school A’. This model provides ‘School B’ levels (see the table below) that set a threshold standard each year by which students are considered to have ‘passed’ the year. This removes the concern of students being ‘stuck’ at the same grade each year; however, it adds a separate concern that there is no overt link with the 1-9 grading, and hence with preparation for KS4. As this system was launched before the 1-9 grading was in place, one would imagine that it will now be updated to reflect these changes; however, this will mean considerable further investment in time and possible confusion as to the required standard.


The work of School ‘B’ and St Bernadette has been based in part upon that of Shaun Allison and Dan Brinton. Allison recommends allowing teachers to set the standard of excellence that they want their students to achieve, and being selective about what they assess in order to prepare students successfully for GCSE. The difference between School ‘B’ and our approach is that, although both systems allow teachers to set the standard of excellence, we have linked this far more explicitly to the GCSE criteria, building age-related thresholds back down from the definitions of the final GCSE grades.

To return to the questions posed by the DFE in their 2015 report (‘why pupils are being assessed, what the assessment is intended to achieve and how the assessment information will be used’): firstly, we set out to ensure that students deepen their knowledge and understanding and know precisely what they need to do to make progress. The assessments will also be used so that teachers can see how their groups are progressing in comparison to other groups, and to provide useful benchmarks against national progress and expectations. This in turn will allow best practice to be shared, strategies to be developed, and interventions to be made to boost academic performance. To ensure that these aims would be met, we felt we should consider the following when developing the new grading and assessment system:

  1. Set standards of excellence that prepare students effectively for their GCSE examinations.
  2. Allow students to clearly understand the level at which they are working and how they can make progress.
  3. Be succinct enough for teachers to clearly identify and moderate the grades that they award within the assessments covered.

In summary, at St. Bernadette we have decided to use a criterion-based system with one key set of 1-9 criteria for each subject throughout Years 7 to 11. The main aim is to ensure that students are best able to make progress from KS2 to KS4, and that what they need to do at each step is made clear to all learners. All faculties have been encouraged to base these levels upon the new 1-9 GCSE criteria for their subject area, which emphasises a link to GCSE expectations throughout the school. We will analyse the success and appropriateness of this link over the next three academic years.

To help ensure both that students clearly understand the grade at which they are working and that teachers are able to clearly identify the progress students are making, we need a blend of summative and formative assessment in place. We have therefore had to ensure that the criterion-referenced system we have developed is flexible enough to take account of ongoing changes and formative progress in class, while still able to make summative judgements about the grade at which a student is currently working.

When developing the structure of the 1-9 grading criteria, we decided that they should not be as prescriptive as those set out by Allison (2014), who suggests insisting that all faculties clearly define a rigid set of knowledge and skills thresholds that they expect students to meet each year, each with its own year-by-year subject definitions of the knowledge and skills required.

We have considered using only the distinction between skills and knowledge. In some subject areas, History for example, a skills-versus-knowledge approach has been taken (see Appendix 1). Other subjects, such as MFL and English, have focused on skills-driven areas to assess, with knowledge used within them, for example through productive and receptive skills (see Appendix 2). At St Bernadette we have felt that this flexibility is important to ensure that subject areas are not forced to use broad headings that are not appropriate to their faculty area. By taking this approach, we hope faculties will be more able to create assessments that are closely linked to the demands of their individual curriculums.

Linking the grades more closely to the curriculum is designed to help students understand exactly where they are each year in relation to the progress that they need to make. The need for a close link to subject curriculums is highlighted by the DFE 2015 report: ‘The new national curriculum puts greater emphasis on the specific knowledge pupils should acquire by the end of each key stage and requires greater depth and detail of learning.’ Therefore, ‘removing levels encourages schools to develop approaches to in-school assessment which are better tied to curriculum content’ (DFE, 2015).

Both ‘School C’ and ‘School D’[3] use a similar system to the one that we will be using. However, when analysing their systems we felt that in places it was unclear how much understanding and certainty students would have each year in deciding whether or not they were making progress. We have still needed, as commented upon by Sizmuir and Sainsbury (1997), to use some form of ‘descriptors… as a means of imposing coherence on diverse elements of attainment.’ To help clarify where we expect students to be, without developing a different set of assessment criteria for every year group as the other models we considered have done, we have developed with faculties the steps we would expect students to have reached by the end of each year group in order to secure a particular grade. It is important to note that these use the same descriptors as the generic criteria, but also specify the content areas that students will have covered in each year group. To avoid simply recreating a version of the national curriculum, these grades must be viewed holistically and do not offer specific steps to reach various sub-levels within the grades. Instead, through module sheets/PLCs, students should be clear about the next steps they need to take to improve their learning in line with the curriculum they are studying.

The success of the new grading system will in large part be determined by the accuracy of the assessment that takes place. Faculties have developed their own criteria for success in assessments based upon the GCSE grading criteria. It is of great importance, over future years and as the GCSE grading progresses, that the moderation process is robust and looks beyond the grade descriptors alone, seeking to work with other schools and institutions to create ‘a common yardstick’ (Sainsbury and Sizmur, 1998). For formative assessment, teachers must not become fixated with using the 1-9 assessment in the classroom, as this may distort effective feedback. However, if it is useful and aids students’ understanding, it should not be deliberately avoided; professional judgement remains important.

What will become increasingly important is, as suggested by Hall (2002), that ‘teachers need to interpret loosely framed level descriptions through a well-defined community of practice.’ To help ensure accuracy in this new system, alongside moderation, a new approach has also been developed to the analysis of results. In 2016/17, results will be carefully compared and detailed breakdowns given to faculties, considering the accuracy of predictions at various grade boundaries. There is also the opportunity to join external moderation activities such as Pixl Curve, which seek to give nationwide security to the grades given. The national moderation process that will take place with 1-9 grades also lends stability to the system we have created, as this relative grade certainty in Year 11 will trickle down to other year groups.

Updating reporting practices

The DFE’s Workload Challenge (2015) highlighted that many teachers found data entry and data management ‘burdensome.’ The change to a new levelling system has allowed us to reconsider the value of the reporting system we have had in place, particularly its impact on teaching and learning and on raising achievement in the classroom. The same report also reminds us that ‘Ofsted does not require progress to be recorded with any particular frequency.’ Research from the DFE (2011) has also shown that, in a survey of teachers, 77% of staff felt there needed to be more involvement from staff in the use of data, and 84% felt that data often does not successfully impact on the development of teaching and learning.

When reviewing the operation of the reporting system we have considered three main questions: How will we use the data that we collect to raise achievement and improve teaching and learning? How much time will the collection and analysis of this data add to teacher workload? How accurate is the information received?

Targeted Progress Points[4]

The 2015 DFE report suggests that targets are not always helpful, as they guide teachers simply to meet certain thresholds. However, the report also states that ‘pupils should develop a better understanding of how they are doing and where they need to target their efforts to progress in order to foster a sense of responsibility for their own learning.’ Hence, to ensure clarity about the progress students need to make from their various starting points, new targeted progress points have been developed, as shown below. This is supported by Professor Cox (1995): ‘difficulties can arise when descriptions do not give clear definitions of progress or do not relate to realistic progression.’

There will be a main target level for all students; the term ‘minimum target’ will no longer be used. For students entering at level 3 or level 4, or with a score below 107 on entry, the target will be a minimum of three grades of progress. For disadvantaged students, and for those entering at level 5 or with a score above 107, the target will be four grades of progress. When students reach KS4 we will also give them a target Attainment 8 grade.[5]

Challenge targets will also be set for students; these will always be one whole level above the target level. Target levels are suitably challenging for the majority of students, and there is an expectation that challenge targets will be used where the target level is not challenging enough for individual students. Heads of Learning and class teachers are responsible for deciding when challenge targets are used in their classes and faculty areas. It is anticipated that challenge targets will be most frequently used to stretch more able students towards grades 8 and 9. There is a challenge, particularly in KS3, in ensuring that these targets do not become the main focus of the classroom teacher: rather, the knowledge and skills that students need to develop should be emphasised, with target grades used to accurately reflect this progress.
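The targeting rules described above can be sketched as a short function. This is an illustrative sketch only: the function names, inputs and exact banding are assumptions drawn from the description, not part of the school's published system.

```python
def target_progress_grades(entry_level, entry_score, disadvantaged=False):
    """Return the targeted number of grades of progress from entry.

    entry_level is the KS2 level on entry (e.g. 3, 4 or 5),
    entry_score the score on entry, and disadvantaged a pupil flag.
    """
    # Disadvantaged pupils, level 5 entrants, or scores above 107
    # are targeted four grades of progress...
    if disadvantaged or entry_level >= 5 or entry_score > 107:
        return 4
    # ...otherwise (level 3, level 4, or a score below 107) the
    # target is a minimum of three grades of progress.
    return 3


def challenge_target(target_grade):
    # Challenge targets are always one whole grade above the target.
    return target_grade + 1
```

For example, a level 4 entrant with a score of 100 would be targeted three grades of progress, while a disadvantaged pupil with the same starting point would be targeted four.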

Spotlight reports will highlight progress against the targeted grade as follows:

  • Below targeted grade: significantly below the targeted grade (over one whole grade behind).
  • Minimum expected progress: at the minimum progress point (between one and three sub-levels behind).
  • Good progress: at the targeted grade.
  • Excellent progress: above the targeted grade.
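The banding above amounts to a simple threshold rule. The sketch below is illustrative only: it assumes the gap is measured in sub-levels, with three sub-levels to a whole grade, which is not stated explicitly in the policy.

```python
def progress_band(sub_levels_behind):
    """Map a pupil's gap from their targeted grade onto the
    spotlight report bands.

    Positive values mean the pupil is behind target, negative
    values mean ahead; three sub-levels are assumed per grade.
    """
    if sub_levels_behind > 3:        # over one whole grade behind
        return "below targeted grade"
    if sub_levels_behind >= 1:       # one to three sub-levels behind
        return "minimum expected progress"
    if sub_levels_behind == 0:       # at the targeted grade
        return "good progress"
    return "excellent progress"      # above the targeted grade
```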


Average grades and target setting for subjects in all year groups.

It is imperative that ‘descriptions are written with reference to empirical data on pupil performance, to avoid the danger that unrealistic standards will be set’ (Green, 2002). We know that broad faculty targets based on three levels of progress have in the past been unhelpful and demotivating to teachers. Therefore, average grades in target setting will be based on a calculation using the national transition matrices, national attainment data and the number of students from each starting point in each year group, across all of the measures used, to allow for ambitious yet realistic targets. These may be updated each year as more accurate data becomes available, particularly in light of the new GCSEs.

For average grades, a calculation will then be made that sets an appropriate average grade for the stage the students are working at, using the same step-up points as those used for progress. For example, if a final Year 11 average grade of 7+ is set for a faculty area, this will be grade 6= at the end of Year 9. For options subjects these will be recalculated, as the composition of these groups changes from Year 9 to Year 10.
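The stage calculation can be pictured as stepping down a ladder of sub-grades. This sketch is an assumption built from the single worked example given (7+ in Year 11 corresponding to 6= at the end of Year 9, three steps on the ladder below); the actual ladder and number of steps per stage are defined by the school's step-up points, not by this code.

```python
# Assumed sub-grade ladder: each numbered grade split into a
# secure ("=") and high ("+") band, e.g. ... 6=, 6+, 7=, 7+ ...
LADDER = [f"{g}{s}" for g in range(1, 10) for s in ("=", "+")]


def stepped_down_grade(year11_grade, steps):
    """Translate a final Year 11 average grade target into an
    equivalent earlier-stage target by stepping down the ladder."""
    return LADDER[LADDER.index(year11_grade) - steps]
```

Under these assumptions, stepping a Year 11 target of 7+ down three sub-grades gives 6=, matching the worked example above.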

The average grades measure will continue to be used alongside the number of students on track for a ‘good’ grade at GCSE in KS4, the % of students on track for targeted progress and those students above targeted progress.

It is important that, as part of this process, all teachers and Heads of Learning are aware that this system is used to highlight trends and is not a driving factor in performance management. This emphasis should help teachers not merely to push their students towards meeting a particular threshold, which the DFE warns against.


Conclusions and next steps

Research from the DFE and a range of academics has emphasised the need for a flexible system, yet one that still allows all stakeholders to understand precisely the progress that students have made. A variety of approaches have been taken nationally, occupying various positions on this scale, but largely all based around criterion referencing. The in-school summative system that we are seeking to develop is informed by national standardised summative assessment through our 1-9 system. We are confident in the new levelling system we have developed for our context. However, a number of key next steps must be considered over the coming academic years if this system is to be deemed a success. The timeline in Appendix 7 gives an overview of how we will address these challenges in the coming academic year.

  • Launch and explain the new 1-9 system to students, teachers and parents.
  • Ensure that the 1-9 system encourages effective formative assessment rather than restricting it. As part of developing formative assessment, teaching and learning should explore the benefits of mastery approaches in enhancing students’ knowledge and understanding.
  • Secure the moderation process for grades, and its links to final GCSE moderation, as this is crucial to the accuracy of grading. (Appendix 5)
  • Embed the key steps for each year group, which will be important in helping students understand how they are progressing and how they can improve the standard of their work.
  • Avoid building too detailed a version of the grade descriptors, so as not to recreate the complex national curriculum, while linking curriculum provision in subject areas more closely to assessment systems: a clear body of assessment in each subject that links to the 1-9 system and offers clear steps for precisely what students need to do to improve. This process began in 2015/16.
  • Review the terminology used for the 1-9 system in KS3.
  • Further develop assessment systems to support SEND students.
  • Review faculty baseline tests for diagnosing ways in which students need to improve and consolidate their learning.
  • Link the descriptors closely to the excellence we expect of students at St. Bernadette School without becoming too prescriptive.


References

  • Angoff, W.H. (1974) Criterion referencing, norm referencing and the SAT, College Board Review, 92, pp. 2-5.
  • Brown, S. (1988) ‘Criterion referenced assessment: what role for research?’ in Black, H.D. and Dockerell, W.D., New developments in educational assessment, British Journal of Educational Psychology, Monograph series no. 3, pp. 1-14.
  • Cox, B. (1995) The Battle for the English Curriculum London: Hodder & Stoughton.
  • Dearing, R. (1993) The National Curriculum and its Assessment: interim report London: National Curriculum Council and Schools Examinations and Assessment Council.
  • Dweck, C.S. (1986) Motivational processes affecting learning, American Psychologist (special issue: Psychological science and education), 41 (10), pp. 1040-1048.
  • DFE (2015). Final report of the Commission on Assessment without Levels. Chaired by John McIntosh CBE
  • DFE (2014) Report into the use of data within state schools in England and Wales.
  • Hall, K. and Harding, A. (2002) Level descriptions and teacher assessment in England: Towards a community of assessment practice. Forthcoming article, Educational Research.
  • O’Neil, J. (1994) Aiming for new outcomes: The promise and the reality, Educational Leadership, 5, March.
  • Pollitt, A. (1994) ‘Measuring and evaluating reliability in national curriculum assessments’ in Hutchinson, D. and Schagen, I. eds, (1994) How reliable is national curriculum assessment? London: NFER.
  • Popham, W.J. (1980) ‘Domain specification strategies’ in Berk, R.A. ed, (1980) Criterion referenced measurement: the state of the art, pp. 15-31. Baltimore and London: Johns Hopkins University Press.
  • Sainsbury, M. and Sizmur, S. (1998) Level descriptions in the National Curriculum: What kind of criterion-referencing is this? Oxford Review of Education, 24.2, pp. 181-193.
  • Sizmur, S. and Sainsbury, M. (1997) Criterion referencing and level descriptions in National Curriculum assessment, British Journal of Curriculum and Assessment, 7.1, pp. 9-11.
  • Wiliam, D. (1993) Validity, dependability and reliability in National Curriculum assessment, The Curriculum Journal, 4.3, pp. 335-350.

Appendix 1


Appendix 2


Appendix 3

Assessment and curriculum 2016/17 (Version A)

Subject area: ____________________


Notes: Students who fail to achieve their targeted grade will be required to re-sit the assessment later in the school year.

Appendix 4

Curriculum and assessment points 2016/17 (Version B)

Subject area:________________


Appendix 5

Sample moderation top sheet 2016/2017


Appendix 6


Appendix 7



[2] From 2011 to 2015 there has been a 20% increase in the number of level 5 students on entry.

[3] As recommended as examples in the 2014 DFE report into life without levels.

[4] As is currently the case this will be different for disadvantaged students, by one level. Challenge targets are always one whole level above the targeted level.

[5] For Year 10 2016/17 this will not be available until February 2017.