Tracking pupil progress in design & technology

The demise of levels, and the government position that Ofsted will expect to see evidence of good quality pupil tracking data but will not expect schools to keep records of pupil attainment in a specific format, puts a considerable onus on schools to develop new approaches. The Report of the NAHT Commission on Assessment notes on page 7 that schools should be able to use “suitably modified National Curriculum levels as an interim measure in 2014 but that any use of levels in relation to the new curriculum can only be a temporary arrangement to enable them to develop, implement and embed a robust new framework for assessment. Schools need to be conscious that the new curriculum is not in alignment with the old National Curriculum levels.” So no pressure then, although it is worth noting that the Commission did recommend that “schools should work in collaboration, for example in clusters, to ensure a consistent approach to assessment” (page 6).

I’ve always found the whole business of assessment in design & technology tricky territory, so I was really pleased when Dylan Wiliam sent me a draft of his pamphlet Principled assessment design, soon to be published by the Specialist Schools and Academies Trust. I’ve used this as a reference piece to ensure my thinking on assessment is sound. Whatever we choose to assess, and however we choose to assess it, we need general agreement about the nature and content of the subject – what Dylan refers to as the ‘construct’ of the subject. This can be seen as the BIG ideas that comprise the subject – an approach used in science education by Wynne Harlen et al in their publication Principles and big ideas of science education, and one which I’ve discussed with regard to design & technology in a previous post.
I think we perhaps need to step back and look at the purpose of teaching such BIG ideas, and in the case of design & technology I’d argue for at least two purposes: giving young people a) technological capability (usually seen as teaching them to design and make products of various sorts that they consider to be worthwhile) and b) technological perspective (usually seen as teaching them to have insight into ‘how technology works’, such that they develop a constructively critical view of technology and are able to consider how technology might be used to provide products and systems that help create the sort of society in which they wish to live). So we can see pupils’ learning journeys in design & technology as needing the teaching of BIG ideas which enable pupils to acquire knowledge, understanding and skills in the context of developing capability and perspective. To my mind there is little point in, or indeed possibility of, tracking pupil progress if the teaching team involved don’t agree on some such overall purpose for the endeavour.

So if we are at the point where there is consensus on the overall purpose of the teaching, the next step for the team is to devise an agreed learning journey that pursues this purpose. I want to leave the exact detail of the journey to one side for the moment and paint a broad-brush description of the learning activities that such a journey will include. Clearly there will be some designing and making activities. I suggest that there should also be a place for designing without making and for making without designing, plus some activities devised to enable pupils to explore the relationship between technology and society. The learning journey would be a sequence of such learning activities, and within each activity there would be different degrees of direction and choice for the pupils, made according to the teachers’ professional judgement. Now each teacher is in a position to build up a record of the learning journey of each pupil he or she teaches, and to include in that record data which can be used to make inferences about the progress of that pupil. Dylan is adamant that any data collected must be useful rather than overwhelming, should be collected in a way that supports immediate use, and should not be used complacently. Assuming we keep these warnings in mind, a record of the learning journey in terms of the broad activities it involves is starting to look like a decent way to track progress. The key question now is what should be recorded about the response of a pupil to each of the learning activities, with a view to keeping the whole thing a) manageable and b) useful to those concerned with pupil progress?

For a designing and making assignment it seems to me important for teachers to be very clear about both the learning intention of the task, in terms of the BIG ideas that pupils need in order to tackle the designing and making, AND the capability that might be achieved through this learning in designing, making and evaluating. Hence for any designing and making assignment it will be important for teachers to write criteria describing different levels of achievement in any response to the assignment. I feel that three different levels for each of designing, making and evaluating would provide enough discrimination between pupils’ differing achievement, so a table summarizing this might look as follows:


This could be represented as a tripod, with the pupil’s performance across the criteria presented by drawing a line between the variously coloured discs as shown:


It would be essential to have a description of the assignment, an image of the item the pupil had produced and the table listing the criteria all on the same sheet, so that anyone reading the sheet had all the information they needed to think about the pupil’s achievement in this particular designing and making assignment. I can also imagine, at the bottom of such a record of achievement sheet, a sequence of such tripods, each with the appropriate lines showing how the pupil had performed in previous designing and making assignments, thus giving a record of how the pupil was progressing with this aspect of design & technology across time – say five or six designing and making assignments across Key Stage 3.
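For schools wanting to hold such record sheets digitally, a minimal sketch of the data involved might look like the following. All names and the 1–3 encoding of the three achievement levels are my own inventions for illustration, not part of any existing school system:

```python
from dataclasses import dataclass, field

# Hypothetical encoding of the three achievement levels per criterion:
# 1 = lowest, 2 = middle, 3 = highest.

@dataclass
class AssignmentRecord:
    """One designing and making assignment for one pupil."""
    assignment: str   # description of the task
    image_ref: str    # photo of the item the pupil produced
    designing: int    # 1-3 against the assignment-specific criteria
    making: int
    evaluating: int
    comment: str = ""

@dataclass
class PupilTrack:
    """The sequence of tripods across Key Stage 3 for one pupil."""
    pupil: str
    records: list = field(default_factory=list)

    def add(self, record: AssignmentRecord) -> None:
        self.records.append(record)

    def profile(self) -> list:
        """The row of tripods at the foot of the sheet, oldest first."""
        return [(r.designing, r.making, r.evaluating) for r in self.records]
```

The point of the sketch is only that each record keeps the task description, the image and the three judgements together, exactly as the paper sheet would.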


It’s important to remember that the criteria on which this performance is assessed are assignment specific, and it will be part of the teaching team’s remit to ensure that the assignments become progressively more demanding across time and that this increasing demand is reflected in the criteria they write. This won’t be an easy task, but it is one that will focus the team’s efforts on devising a learning journey that is progressive with regard to designing and making. I can see very useful discussions taking place with pupils and parents around such a progress tracking sheet. During Key Stage 3 the ongoing data provided could be used to help the pupil see what they need to do to improve. Towards the end of Key Stage 3 the overview provided should be a great help to parents and pupils in thinking about GCSE prospects in design & technology, should the pupil choose the subject. I accept that this approach to progress tracking won’t give a test score, which is often seen as a desirable feature of assessment. However I am heartened by these recommendations made in the Assessment Commission Report (page 6):

  • Pupils should be assessed against objective and agreed criteria rather than ranked against each other.
  • Pupil progress and achievement should be communicated in terms of descriptive profiles rather than condensed to numerical summaries (although schools may wish to use numerical data for internal purposes).

In response to the last point, I’m sure it is possible to give the red, yellow and green discs a number value and then calculate scores for individual designing and making assignments, and perhaps aggregate scores across time, but I think the resulting numbers would be pretty meaningless. And it is worth noting that the summary information on the designing and making tracking sheets can be supplemented at any time during a discussion with stakeholders by reference to the pupil’s class work – available in a traditional ‘design sketch book’ perhaps, or in digital form as an online design diary. So if this approach to tracking progress works for the designing and making assignment activities, can it be used with the other suggested activities?
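To see why such numbers would be misleading, consider a quick sketch (the 1/2/3 disc values are my own assumption, not a scheme from the Report): two pupils with quite different strengths can produce identical totals, so the aggregate hides exactly the profile information the tripod preserves.

```python
# Hypothetical numeric encoding of the discs: red = 1, yellow = 2, green = 3.
DISC_VALUE = {"red": 1, "yellow": 2, "green": 3}

def assignment_score(designing: str, making: str, evaluating: str) -> int:
    """Collapse one tripod into a single number by summing the disc values."""
    return sum(DISC_VALUE[d] for d in (designing, making, evaluating))

# A strong designer who struggles with making...
pupil_a = assignment_score("green", "red", "yellow")
# ...and a strong maker who struggles with designing...
pupil_b = assignment_score("red", "green", "yellow")

# ...get exactly the same score, so the number tells us nothing
# about where either pupil needs to improve.
assert pupil_a == pupil_b == 6
```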

Designing without making stems from the Young Foresight Project and was recommended in the National Strategy for design & technology in 2004. Pupils are required to design, but NOT make, a product and associated services, using a new and emerging technology as a starting point. The design has to be justified from four perspectives – technical feasibility, meeting people’s needs and wants, acceptability in society and marketability. It seems to me that the SOLO (Structure of Observed Learning Outcomes) approach might be useful for assessing such a complex set of responses. Thanks again to Dylan for information on the SOLO taxonomy, which identifies five different levels of structure that we might observe in students’ work:

  • Pre-structural: the response does not address the requirements of the task;
  • Unistructural: the response addresses a single aspect of the task;
  • Multistructural: the response addresses multiple aspects of the task, but these multiple aspects are treated independently;
  • Relational: the response addresses different aspects of the task that are related to each other and therefore become an integrated whole;
  • Extended abstract: the ‘integrated whole’ is conceptualized at a higher level, for example as an element of a more abstract structure, or is applied to new relevant areas.

Pupils usually produce a sketch of their design idea with ‘justification annotations’ and this could be scrutinized using the SOLO taxonomy. I’m not sure any would reach the ‘extended abstract’ category but the other categories would be appropriate. So in terms of tracking I envisage a sheet with an image of the pupil’s idea plus a teacher comment using SOLO. And note David Didau is tweeting about SOLO!
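If a department did want to record such SOLO judgements consistently across teachers, a small sketch might help. The level names come from the taxonomy above; everything else here is invented for illustration:

```python
from enum import IntEnum

class Solo(IntEnum):
    """SOLO taxonomy levels, ordered by structural complexity."""
    PRE_STRUCTURAL = 0
    UNISTRUCTURAL = 1
    MULTISTRUCTURAL = 2
    RELATIONAL = 3
    EXTENDED_ABSTRACT = 4

def designing_without_making_entry(pupil: str, image_ref: str,
                                   level: Solo, comment: str) -> dict:
    """One tracking-sheet entry: the pupil's annotated sketch plus a SOLO judgement."""
    return {
        "pupil": pupil,
        "image": image_ref,
        "solo_level": level.name.replace("_", " ").title(),
        "comment": comment,
    }

entry = designing_without_making_entry(
    "A. Pupil", "idea_scan.jpg", Solo.RELATIONAL,
    "Links technical feasibility to marketability convincingly.")
```

Using an ordered enum means that progress from, say, Multistructural to Relational across successive tasks can be read straight off the record.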

Making without designing is particularly useful for teaching making skills and it is not difficult to create a set of criteria by which the results of such making might be assessed: the level of precision in producing the parts, the correctness of the assembly and the care taken over finishing for example. So in terms of tracking I envisage a record sheet with an image of the pupil’s made item plus a teacher comment using the agreed criteria.

Exploring technology and society activities are likely to cover a wide range of considerations, such as the possibility of developing a circular economy, the possible impacts of disruptive technologies, and the influences of iconic designers. Here again it seems to me that a SOLO approach to assessing this work would be appropriate. So in terms of tracking I envisage a summary sheet describing the exploring technology and society tasks plus teacher comments using SOLO.

So taken together, what would I have as my tracking record for a pupil? There would be a collection of sheets assessing progress in designing and making, plus one sheet for each year of the key stage covering the other activities – describing two making without designing activities, two designing without making activities and a range of exploring technology and society activities. This seems to be very rich data that would be extremely useful in any conversations with individual pupils and parents, and would also provide the teaching team with easily accessible feedback on the effectiveness of the various task types.

The methods for tracking that I am suggesting combine tracking with a clear statement of the teaching intentions and learning experience. This I think is particularly useful for design & technology, as it is a much-misunderstood subject. It confused the Expert Panel in 2011, who found it so difficult to comprehend that they questioned its epistemological roots. With the BIG ideas identified, and the way in which they are taught through particular approaches made clear, this is no longer the case. I believe the tracking I’ve described here takes this into account and will inform pupils, parents and SLT as to the nature and worth of the subject and the way pupils make progress. Note that I have deliberately concentrated on the learning experience, and pupils’ response to that experience, as the method of tracking. I have deliberately avoided setting ‘tests’, preferring an approach that is minimally invasive in that it uses the work produced by pupils as part of their everyday learning in the subject as the basis for tracking. There may be a case for setting tests, particularly with regard to finding out about pupils’ learning of so-called ‘troublesome knowledge’ – concepts that are known to be, for example, conceptually difficult (as in the relationship between force applied and movement achieved in mechanisms) or alien, in that they require pupils to adopt different perspectives in order to understand the views of different stakeholders. Such tests could reflect the nature of questions used on GCSE written papers – more of this in a future blog.

I must now return to the question, “Would this be useful to those concerned with tracking pupil progress?” To my mind it provides very rich data that would be extremely useful in any conversations with individual pupils and parents, and would provide the teaching team with easily accessible feedback on the effectiveness of the various task types. It doesn’t easily yield any sort of quantitative data, so an SLT wedded to figures might not see it as that useful, but I think it would paint a good picture of the messy business of what it means to make progress in design & technology and the sort of progress being made by individual pupils. But would collecting all this data be manageable? Here I have my doubts, but would welcome comments from teachers and heads of department. Lots of effort would be required to set up such a system, but once set up would it be that demanding to maintain? Could it respond flexibly to a changing curriculum? Could it be generated and stored digitally? These are questions which only those working in schools can answer.







Legitimate Accountability or Educational Attainment Determinism

At first the following statement by David Laws in October 2013 seemed reasonable: “The current situation is unfair due to focus on C/D borderline, not rewarding progress from E to D or C to B or B to A. A pupil’s key stage 2 results, achieved at the end of primary school, will be used to set a reasonable expectation of what they should achieve at GCSE. Schools will get credit where pupils outperform these expectations. A child who gets an A when they are expected to get a B, or a D when they were expected to get an E, will score points for their school. This approach will ensure that all pupils matter, and will matter equally.”

The immediate question to spring to mind was, “Can achievements made by the end of primary school accurately predict achievements in a GCSE examination five years later?” I wasn’t sure, so I asked Professor Robert Coe in an informal conversation. He was very clear in his reply: statistically and on an individual basis the predictions are robust for English, Mathematics and Science. I told him I was particularly interested in predicted performance with regard to design & technology. He admitted straight away that data from primary schools was unlikely to provide an accurate predictor. “What should d&t teachers do?” I asked, as it seemed clear to me that they would be expected to make predictions and be held to account for them. Robert indicated that there were several ways forward, all involving not inconsiderable effort. Much closer liaison with feeder primary schools was one possibility. In an ideal world, yes, but given that it is widely acknowledged there is considerable variability in primary d&t provision, and that some schools have ten or more feeder schools, this didn’t seem that feasible. Another was to make the prediction at the end of Year 7, after teachers have had a chance to teach and assess pupils’ responses. He agreed that this was useful, but stressed that there should be some cross-school comparisons to ensure that the d&t curriculum and its assessment in a particular school wasn’t so at variance with typical data from other schools as to skew the possibility of making a reasonable prediction of GCSE performance in d&t. This made sense but added significantly to the workload of the task.

Then it struck me. Just how useful is it for the pupil aged 11 (or their parents) to know what he or she is likely to achieve in an examination in five years’ time, even if that prediction is accurate (which seems unlikely in d&t)? What is likely to be the response of the pupil told that their predicted grade is low, or high, or middle of the road? Will a high grade expectation cause the pupil to slack? Will a low grade expectation cause the pupil to give up? Will the expectations be achieved on a self-fulfilling prophecy basis? Why should pupils starting secondary school, and their parents, be saddled with this information? What seemed important to me was that the curriculum they were offered was engaging and enjoyable, in that it intrigued without baffling and challenged without being daunting, so that the pupils worked as hard as they could and received useful ongoing feedback on how to improve. Clearly it is very important to track progress across Key Stage 3 (more on this in a future post). From this tracking, and the knowledge of pupil progress it gives, it should be possible to have a three-way conversation (pupil, parent, teacher) at the end of Key Stage 3 about whether a pupil should continue with a subject to GCSE level (assuming it is optional), and, if the pupil does opt for it, what sort of result can be expected (based on three years’ worth of information and experience) and what might be done to improve on that expected result. I can’t see that delaying the prediction of GCSE performance until the pupil is 14 will do much to damage accountability. Nor do I see that this would invalidate information that primary schools might provide to secondary schools. Such information is useful if it provides a snapshot of where the pupil is and any particular strengths or weaknesses that pupil might have; this will help secondary teachers enhance the learning prospects for that pupil.

Are these new accountability measures a form of educational attainment determinism? What do others think?