Designing and embedding computer based assessment within KS3 Computing

An Action Research project by Anthony Stephens (Computing)

Reading time: 10 mins

PROJECT INTRODUCTION

The move from ICT to Computing is still in its infancy. This has resulted in an emerging curriculum and a need for critical thinking and coding that is at odds with pupils’ experiences of traditional learning. Inconsistent Primary School experiences and, in some cases, lower literacy rates make it challenging to upskill pupils in line with academic expectations at KS4. Students often find Computing assessments intimidating, as there can be a disconnect between their experience of learning and coding in front of a computer and transferring those skills into a more traditional paper-based assessment. As a result, such assessments favour those who can manage the stress of traditional testing rather than demonstrating the knowledge a pupil actually has, making it difficult to measure their knowledge accurately in this way.

PROJECT OBJECTIVES

The project has some key deliverables to benefit teaching and learning within the classroom. The key objectives of the assessment are as follows:
1. Provide a consistent assessment experience for all KS3 pupils.
2. Accurately measure progress against the new 9-1 curriculum grades*.
3. Develop a standardised, automatic assessment methodology that is measurable and makes assessment more time efficient.
4. Be robust yet flexible enough to demonstrate deeper learning.
5. Provide opportunities for pupil voice which feed back into curriculum development, monitor engagement and can be applied to building a ‘bigger picture’ of a pupil.

*At this time an internally established set of grade criteria is being used.

PROBLEMS WHICH MAY OCCUR

There are a number of potential pitfalls in designing a suitable computer-based assessment. It is important to understand what these are, and they will be considered throughout the project. They include: the cost of technology; the complexity of the solution for students; setup time versus time saved; the difficulty of measuring qualitative data; and priorities being diverted elsewhere due to the time pressures and administrative load of the role.

PROJECT MILESTONE STAGES:

The project will be managed in stages that will provide a logical development cycle to achieve a suitable outcome.

These include:
1. Research/Investigation of technology options.
2. Evaluation and selection of appropriate technology.
3. Self-training and development of solution to match objectives.
4. Pilot solution.
5. Build automated assessments and assessment criteria for units.
6. Evaluate outcomes of assessments across units.

PROJECT STAGE 1: RESEARCH/INVESTIGATION OF TECHNOLOGY OPTIONS.

All options considered were web-based platforms, due to the ease of delivering the assessment to students in an internet-connected classroom. Many choices, such as Socrative, Edmodo, ClassMarker, PollEverywhere and Topgrade, were discounted due to a lack of depth, limited customisation options, an inability to export data, or cost. Superficially they can produce assessments, but not with the depth and scope required to fulfil the project objectives.

PROJECT STAGE 2: EVALUATION AND SELECTION OF APPROPRIATE TECHNOLOGY.

The two web-based systems that matched the project objectives were QuestBase and Google Forms. After initial prototyping of assessments in each, the following advantages and disadvantages were found.

[Comparison table: QuestBase v Google Forms*]

* The price of a school licence was (at the time of writing) between GBP 449 and GBP 1,350, depending on the type of package sought.

SOFTWARE SELECTION:

QuestBase would be more suitable and engaging for the pupils, but the high start-up cost would need to be carefully considered. As a proof of concept, Google Forms will be used to develop a KS3 solution. It does not have the depth and features of QuestBase, but it does have enough depth and customisation to measure learning and gather the data in a meaningful format.

It is important at this stage to revisit and map Google Forms against the project objectives. This will ensure the software is fit for purpose and that the project is on track to be successful.

1. STANDARDISED AUTOMATIC ASSESSMENT METHODOLOGY THAT IS MEASURABLE AND MAKES ASSESSMENT MORE MANAGEABLE AND TIME EFFICIENT.

Once an assessment has been developed and the questioning carefully considered, Google Forms can successfully centralise the data, matching results to pupils across classes and feeding into reporting.

2. PROVIDES CONSISTENCY OF ASSESSMENT AND ASSESSMENT EXPERIENCE FOR ALL PUPILS WITHIN KS3.

The methodology of assessment can be built in, so all pupils experience the same guidelines, instructions, assessment environment and conditions before taking the assessment. Google Forms presents an identical assessment to every student.

3. ACCURATELY MEASURES PROGRESS AGAINST NEW 9-1 CURRICULUM LEVELS.

Google Forms can map grading to levels using VLOOKUP functions within the Google Sheet that the data is published to.
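
As an illustration, the lookup that the VLOOKUP performs can be sketched in Python. This is a minimal sketch only: the percentage boundaries below are hypothetical placeholders, not the department’s internally established grade criteria.

# Minimal sketch of threshold-to-grade mapping, equivalent to a
# VLOOKUP over a boundary table in the published Google Sheet.
# The percentage boundaries are hypothetical placeholders.
GRADE_BOUNDARIES = [
    (90, 9), (80, 8), (70, 7), (60, 6),
    (50, 5), (40, 4), (30, 3), (20, 2), (0, 1),
]

def score_to_grade(percentage: float) -> int:
    """Return the first 9-1 grade whose boundary the score meets."""
    for boundary, grade in GRADE_BOUNDARIES:
        if percentage >= boundary:
            return grade
    return 1

print(score_to_grade(73))  # -> 7

In the Google Sheet itself, the equivalent is a VLOOKUP with approximate matching over a boundary table sorted in ascending order, so each score falls into the correct grade band.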

4. ROBUST YET FLEXIBLE ENOUGH TO DEMONSTRATE DEEPER LEARNING.

Research has indicated that assessment design is key. I will be utilising “Exploring the potential of Multiple-Choice Questions in Assessment” (Higgins and Tatham, 2003) as a foundation for assessment question design. Assessments will combine multiple-choice questions and open-ended questions.

5. PROVIDE OPPORTUNITIES FOR PUPIL VOICE WHICH FEEDS BACK INTO CURRICULUM DEVELOPMENT, MONITORS ENGAGEMENT AND CAN BE APPLIED TO BUILDING A ‘BIGGER PICTURE’ OF A PUPIL.

Pupil voice and student experience will be incorporated into assessment design, as well as pupil self-reflection on their own progress and learning.

PROJECT STAGE 3 AND 4: SELF-TRAINING AND DEVELOPMENT OF SOLUTION TO MATCH OBJECTIVES AND PILOT SOLUTION.

Google Forms previously relied on the Flubaroo add-on for analysis. This matched an answer key against pupil responses, but the administrative load was high and significant manual analysis was required after each assessment. Google Forms has since introduced an inbuilt grading system based on allocated points, which allows easier data collection. I have managed to map results against levels using Google Sheets formulas, but will need to consider the levelling of results carefully.

The pilot stage was implemented across Y7 classes and, although training and setup were time consuming, the assessments were trouble free. All students were introduced to the assessment method through a screen-captured instruction video at the start of the assessment. Pupils were graded on their Understanding Computing unit of work, which enabled me to develop a wide range of questions. Some deeper knowledge questions were possible on the subject of binary to denary number conversions. Pupils were provided with instant feedback and were able to print their assessment, which highlighted weaknesses in their own knowledge for revision and improvement purposes. It was also possible to weight the scores of more difficult questions (Figure 1.0).

Figure 1.0: Self-marking visual feedback for pupils
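
As an aside on question content, the binary to denary conversions mentioned above use the place-value method pupils apply by hand: each binary digit contributes its place value (8, 4, 2, 1 and so on). A minimal Python sketch of that method, with an illustrative value:

# Each digit doubles the running total and adds itself, which is
# equivalent to summing the place values of the set bits.
def binary_to_denary(bits: str) -> int:
    total = 0
    for digit in bits:
        total = total * 2 + int(digit)
    return total

print(binary_to_denary("1011"))  # 8 + 0 + 2 + 1 = 11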

LESSONS LEARNED FROM PILOT

Several improvements need to be considered at this stage. Many pupils enter their names differently from their registered names, which makes it difficult to match the data to the student and is currently a manual process; future assessments will include names as a drop-down list to prevent this. The weighting of open-ended questions also needs to be considered to achieve a more robust assessment, as does the economy of printed versus online bookmarked assessments. Finally, the number of possible answers (3 choices) was not enough to ensure accurate recall of knowledge, as a 33% chance of randomly selecting a correct answer is too high.
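
Until the drop-down list is in place, one possible stopgap for the name-matching problem is a fuzzy match against the class register. A minimal sketch using Python’s standard difflib module; the register names and cutoff are hypothetical:

import difflib

# Hypothetical class register; in practice this would come from
# the school's management information system.
REGISTER = ["Amelia Jones", "Oliver Smith", "Isla Brown"]

def match_to_register(entered_name: str) -> str | None:
    """Match a pupil-typed name to the closest registered name."""
    lowered = {name.lower(): name for name in REGISTER}
    matches = difflib.get_close_matches(
        entered_name.lower(), list(lowered), n=1, cutoff=0.6
    )
    return lowered[matches[0]] if matches else None

print(match_to_register("oliver smith"))  # -> "Oliver Smith"
print(match_to_register("Ollie Smith"))   # -> "Oliver Smith"

A drop-down list remains the more robust fix, as it removes free-text entry altogether.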

PROJECT STAGE 5: BUILD AUTOMATED ASSESSMENTS AND ASSESSMENT CRITERIA FOR UNITS

Using the lessons learned from the pilot, assessments were developed for a Year 9 unit of coding in HTML & CSS. It was a technical assessment which focussed on various aspects of web authoring taught throughout the term. Using “Exploring the potential of Multiple-Choice Questions in Assessment” (Higgins and Tatham, 2003), the questions were formulated so that a wide range of knowledge was tested.

Many of the potential problems were ‘designed out’ with well-written questions, particularly ensuring they did not:
– give away clues to the answer;
– fail to test the skills required by the intended learning outcomes;
– contain “implausible distracters” (obviously wrong answers) which can be eliminated by students with only limited knowledge;
– confuse or frustrate students with ambiguity.
In addition, using at least four options per question reduces the likelihood of random guessing producing an inaccurate result (a sketch of how these rules can shape a question bank follows below).
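
The sketch below illustrates how these rules might shape a question bank: four plausible options per item and weighted points for harder questions. The questions, weights and scoring helper are hypothetical illustrations, not the actual Google Forms configuration.

from dataclasses import dataclass

# Hypothetical representation of one multiple-choice item.
# Four options keep the chance of a correct random guess at 25%.
@dataclass
class Question:
    prompt: str
    options: list[str]  # four plausible choices, no giveaway clues
    answer: int         # index of the correct option
    points: int = 1     # harder questions carry more weight

def score(questions: list[Question], responses: list[int]) -> int:
    """Total the weighted points for one pupil's responses."""
    return sum(
        q.points for q, r in zip(questions, responses) if r == q.answer
    )

quiz = [
    Question("Which tag creates a hyperlink in HTML?",
             ["<a>", "<link>", "<href>", "<url>"], answer=0),
    Question("What does CSS primarily control?",
             ["Page structure", "Presentation and style",
              "Server requests", "Database storage"],
             answer=1, points=2),
]
print(score(quiz, [0, 1]))  # -> 3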

Unit assessments are being built using the 1-9 grade criteria. These are based on descriptors which map to skills that can be demonstrated through the unit of work. These descriptors form the foundation of the online assessment for the unit in question, which checks understanding. Online assessments are better suited to the theoretical elements of a knowledge-based curriculum, so they will be used in conjunction with practical assessments; online assessments will not completely alleviate the time pressures of all assessment.

Open-ended questions were used to check understanding directly linked to the levelling criteria of the unit of work, such as how a student would debug their own code: a question only effectively answerable through practical application.


Figure 2.0: Example of an assessment task on debugging and troubleshooting

PROJECT STAGE 6: EVALUATE OUTCOMES OF ASSESSMENTS ACROSS UNITS AND THEIR IMPACT ON PUPILS

Assessments were trialled across Years 7, 8 & 9. Questions were varied and directly linked to the assessment criteria for each unit of work. The units were identified for their theoretical, knowledge-based content, which relates to the emphasis of the new knowledge-based computing curriculum at Key Stage 4. The units of work tested are shown below in Figure 3.0.

Year 7 – Using Computers Unit. This was a theoretical unit of work which included some technical aspects of Computing, such as hardware, networking and emailing.
Year 8 – Web Design Unit. This assessment related to the technical aspects of HTML & CSS coding and ran alongside practical assessment throughout the unit of work.
Year 8 – Command Line Programming. This assessment related to text-based programming and closely followed the scheme of work. Students were often presented with programs visually and had to demonstrate their skills by showing understanding.
Year 9 – Understanding Computers Unit. This theoretical unit covered hardware, computer components, binary and denary.
Year 9 – Python Programming. This assessment related to programming concepts learnt and more advanced text-based coding.

Figure 3.0: Assessment Trialling across KS3

IMPACT ON PUPILS

To measure the impact of the automated assessments I carried out qualitative research, in the form of interviews, across Years 7, 8 & 9. Pupils highlighted the following impacts of online assessments versus paper-based testing.

LESS DISCONNECT BETWEEN COMPUTER CLASS WORK AND ASSESSMENT

Pupils stated that they see the assessment as more directly linked to what is learnt in lessons, and therefore more relevant. The process of gaining skills and knowledge on a computer and being assessed on a computer is more natural for them. As a teacher, I have observed reduced anxiety and more of a logical flow between lesson content and assessment. A pupil commented, “It feels less like a test and more like what we do in lessons.”

MORE MOTIVATED TO REVISE AND DO WELL AS THEY COMPARE RESULTS ON ASSESSMENT COMPLETION

Pupils visibly showed more pride in their achievement as they received instant feedback. There was a competitive element amongst peers and, as a result, pupils took a more mature attitude to revision; some asked to take their books home to ensure they were prepared for the assessment. Pupils have been more engaged and talk more about their results at the end of their assessment. In some cases it was very clear that a lack of revision was the reason some pupils did poorly. One pupil stated, “I’m always keen to see if I get better score than everyone else”, and another, “I know I could have done better because they [naming a pupil] did.” It provides them with a benchmark against the rest of the class and an instant link between knowledge recall and outcome.

INSTANT FEEDBACK AND AREAS TO IMPROVE

It is often not useful to a student to receive marked work a couple of weeks later, no matter the quality of the assessment. With summative assessment, many pupils feel they have moved on from the topic and are being pulled back to previous topics and restricted in moving forward. Instant feedback makes the knowledge pupils have learnt, revised and demonstrated more relevant, and where concepts have not been answered correctly, pupils are given feedback on how to tackle them. I would argue that it is more meaningful for a pupil to take on board feedback just after the assessment, when it is fresh in their minds. As one pupil put it, “It’s annoying when I almost got a question right, but when I look at the right answer. I will get it next time.”

The results are printed and added into pupils’ books. This provides personalised feedback in terms of the areas they need to improve on in their learning and acts as an important revision tool for pupils.


Figure 4.0: Personalised feedback of assessment results to pupils

ANOTHER OPPORTUNITY FOR PERSONALISED COMMUNICATION WITH THE TEACHER

The opportunity for a two-way dialogue with the teacher can be meaningful in establishing better student-teacher relationships and fostering a personalised learning approach. It is motivating for pupils to feel able to express their opinions: this not only makes them feel they are listened to, but, more importantly, gives them a stake in their own learning. This holds true, however, only if pupils’ opinions are discussed and acted upon.

IMPACT ON TEACHING

CONSISTENCY OF ASSESSMENT AND DIRECTLY COMPARABLE

Being able to deliver an assessment without bias across different teachers, through a uniform method, is important. This provides more meaningful comparative data on where students are placed within the year group, irrespective of their teacher. It supports accurate recall of knowledge and reduces some of the assessment anxiety, as the assessment more closely matches the pupils’ lesson environment.

TIME EFFICIENT AND CENTRALISED, AND ECONOMICAL FOR HIGH NUMBER OF CLASSES

Once the system is set up, the assessments are time efficient and economical, due to their centralised nature. Assessment sheets require no printing, which keeps costs down, and replicating the assessment across multiple classes is efficient. With good design, the need for manual data inputting of results is greatly reduced, as the results can be exported to Excel.

HIGHLIGHTS AREAS NOT UNDERSTOOD AND OPPORTUNITIES OF CURRICULUM IMPROVEMENT

Personalised feedback for each pupil after an assessment is a positive step for computer-based assessment. From a pupil’s point of view, it provides clear targets for the areas they need to focus on. The centralisation of results also provides an overview of how a year group did per question, and of any misconceptions that occur; the teacher can then refine and focus on these areas when delivering the content again. Figure 5.0 shows some of the data available. It highlighted pupils’ misunderstanding of the use of computational variables, so when this unit is next delivered, variables will be taught differently and understanding will be checked.


Figure 5.0: Centralised results for all class respondents
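
The per-question overview in Figure 5.0 is essentially a facility index for each question. A minimal sketch of the same calculation over exported results, assuming a hypothetical CSV layout (one column per question, with 1 for a correct response and 0 otherwise; the filename is illustrative):

import csv

def facility_per_question(path: str) -> dict[str, float]:
    """Return the proportion of pupils answering each question correctly."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return {
        question: sum(int(row[question]) for row in rows) / len(rows)
        for question in rows[0]
    }

# Questions with a low proportion flag misconceptions (such as the
# variables questions noted above) to target in future teaching.
for question, proportion in facility_per_question("results.csv").items():
    print(f"{question}: {proportion:.0%}")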

PROVIDES MORE OWNERSHIP OF LEARNING FOR PUPILS

This system makes pupils’ ownership of progress, demonstration of work and revision of a topic clear. It focuses pupils on knowledge retention and recall, and naturally surfaces weaknesses in that knowledge which pupils can rectify. Ownership of work and progress gives pupils more of a personal stake and investment in their education, which in turn helps with self-motivation and focus.

HELPS ASSESS MORE KNOWLEDGE BASED CURRICULUM

With the new 1-9 knowledge-based curriculum, the system provides a mechanism for measuring more factual information. The teacher can set numerical thresholds that map results to 1-9 grades, which gives more timely feedback on where a pupil or class lies in terms of progress. The thresholds for different grades can be adjusted using formulas in the output spreadsheet.

NEXT STEPS

In conclusion, the new automated assessment mechanism is well suited to the 1-9 knowledge-based curriculum: it measures factual knowledge while providing personalised feedback for each student, and its grade thresholds are easily adjusted in the output spreadsheet. Developing broader questions which increase in complexity, such as debugging screenshots of code, can be investigated and implemented further to extend and challenge higher-ability students. The integration of the testing with Google Classroom can also be investigated further, as can integration within Google Sites.

Overall, the benefit to the students is clear, and the standardised approach is important when comparing students’ outcomes against each other. The assessments are therefore an organic process that can be developed and changed with the needs of the subject and students during subsequent years.

REFERENCES

[1] Higgins, E. & Tatham, L. (2003). Exploring the potential of Multiple-Choice Questions in Assessment. Available: http://www.celt.mmu.ac.uk/ltia/issue4/higginstatham.shtml. Last accessed December 2016.

Featured image: ‘Coding’ by Pexels on Pixabay.  Licensed under Creative Commons CC0
