
Some queries for Umalusi - Gavin Davis

DA MP says standardisation process may lead to an artificial inflation of the matric marks this year

Matric 2016: Open letter to Umalusi

Dear Dr Rakometsi

I am writing this open letter to you in the spirit of constructive engagement. In doing so, my intention is to safeguard the integrity of our examination system, and to ensure that all learners get a fair deal.

This open letter follows my personal correspondence with you, which contained a series of questions related to the standardisation of the matric marks. In your rather defensive reply marked ‘private and confidential’, you declined to answer my questions.

As I am sure you will appreciate, as a Member of Parliament I have a constitutional obligation to “maintain oversight of the exercise of national executive authority.” The standardisation of our matric marks – as a joint exercise undertaken by Umalusi and the Department of Basic Education (DBE) – is such an exercise of executive authority.

I would therefore be failing in my constitutional obligations if I did not raise questions and concerns relating to the standardisation process when they arise. It is worth adding that your refusal to answer these questions is a serious failure on your part. Indeed, it is a contradiction of one of the principles of standardisation as outlined in the preface of Umalusi’s own guidebook entitled “Understanding Statistical Moderation” which states:

“Sharing this information forms part of Umalusi’s commitment to making its processes transparent to all who have an interest in the examinations Umalusi quality assures and certificates.”

This open letter is your opportunity to re-affirm Umalusi’s commitment to transparency, and I look forward to your open and honest response. I am going to divide the rest of the letter into three sections to make it easier to read and respond to.

1. The general trend towards upward adjustments

At the standardisation meeting in Pretoria last Friday, we learned that 32 of the 58 subjects had their marks adjusted (compared to 29 in 2015). Of the 32 adjusted subjects, 28 had their marks adjusted upwards and only four downwards.

Some of the subjects saw a dramatic upwards adjustment. The following six subjects saw the biggest upward adjustment from the raw mark:

| Subject | Candidates | Raw mark rejected (mean) | Adjusted mark accepted (mean) | Adjustment | Historical mean (2011-16) |
| --- | --- | --- | --- | --- | --- |
| Maths Literacy | 389 015 | 30,06 | 37,22 | +7,16 | 37,20 |
| Mathematics | 285 439 | 27,01 | 30,79 | +3,78 | 30,94 |
| Business Studies | 248 733 | 33,07 | 38,74 | +5,67 | 38,72 |
| IsiZulu HL | 172 611 | 55,12 | 61,79 | +6,67 | 61,77 |
| Xitsonga HL | 27 405 | 54,91 | 64,83 | +9,92 | 67,32 |
| Hospitality Studies | 8 317 | 46,22 | 52,26 | +6,04 | 52,23 |

According to Umalusi and the DBE, adjusting the raw mark upwards is justified if the exam paper was demonstrably more difficult (i.e. more cognitively demanding) than previous years. However, no evidence has been put forward to demonstrate that these papers were of a higher standard.

I noticed at the standardisation meeting that the starting point for adjusting the marks was not the papers themselves, but the results. In cases when the raw mark was worse than last year’s, the DBE went back to the paper and found difficult questions to explain the drop in the raw mark. The DBE then motivated for the raw mark to be adjusted upwards accordingly.

This methodology seems flawed. As a general principle, the cognitive demand of the papers should be assessed independently of the marks, preferably before the papers are written so that the need to adjust the marks afterwards is minimised.

The obvious problem with using the raw marks as an indicator of a paper's cognitive demand is that there may be cases in which the paper was of the appropriate standard, but the learners (for whatever reason) performed below the standard of previous years. Adjusting the marks upwards in such cases would mask systemic problems that need to be addressed.

I believe that there is an obligation on Umalusi to publish evidence that the exam papers for all 28 subjects adjusted upwards were more cognitively demanding than previous years. The public has a right to know why the marks were adjusted in each case.
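By way of illustration, the gap between the accepted adjusted mean and the historical mean can be read directly off the table above. The short sketch below simply recomputes those gaps; it is an illustrative calculation using the tabled figures, not a description of Umalusi's standardisation procedure.

```python
# Gap between the accepted adjusted mean and the historical mean (2011-16)
# for the six subjects in the table above. Figures are taken from the table;
# this is an illustrative calculation, not Umalusi's standardisation method.
subjects = {
    # subject: (raw mean rejected, adjusted mean accepted, historical mean)
    "Maths Literacy":      (30.06, 37.22, 37.20),
    "Mathematics":         (27.01, 30.79, 30.94),
    "Business Studies":    (33.07, 38.74, 38.72),
    "IsiZulu HL":          (55.12, 61.79, 61.77),
    "Xitsonga HL":         (54.91, 64.83, 67.32),
    "Hospitality Studies": (46.22, 52.26, 52.23),
}

for name, (raw, adjusted, historical) in subjects.items():
    print(f"{name:20s} adjustment {adjusted - raw:+6.2f}  "
          f"gap to historical mean {adjusted - historical:+6.2f}")
```

On these figures, the accepted adjusted mean lands within 0,15 of the historical mean in five of the six subjects; only Xitsonga HL remains appreciably below its historical mean.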

2. The impact of progressed learners

In 2016, 109 400 progressed learners (13,4% of total enrolment) wrote the National Senior Certificate (NSC) examination, up from 66 088 in 2015. It follows that there was a significant increase in the number of weaker students who wrote the NSC this year. This raises the question of whether the inclusion of progressed learners in the standardisation process leads to certain anomalies.

If the raw marks are used as an indicator of an exam paper's cognitive demand (as described above), then the inclusion of progressed learners in the standardisation process is likely to skew the data: if the raw marks are lower than in previous years (when there were no progressed learners), the drop could be the result of the inclusion of weaker students (i.e. progressed learners) rather than of more difficult papers.

As set out earlier, the only legitimate justification for adjusting the raw mark is if the paper was more or less cognitively demanding than previous years. Could it be that the inclusion of progressed learners in the standardisation process creates additional impetus to adjust the marks upwards, for reasons not related to the cognitive demand of the papers? And, if so, would this not mean that the marks of non-progressed learners will end up higher than previous years when there were no progressed learners?
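To make the concern concrete, consider a deliberately simplified sketch. Only the 13,4% progressed-learner share is taken from the enrolment figures above; the subject means used below are invented for illustration. It shows how a weaker sub-cohort can drag the raw mean down without the paper being any harder, and how a blanket upward adjustment then lifts non-progressed learners above the historical level.

```python
# Hypothetical illustration of the progressed-learner effect.
# Only the 13.4% share comes from the letter; all means are invented.
progressed_share = 0.134
non_progressed_mean = 33.0   # hypothetical: in line with the historical mean
progressed_mean = 20.0       # hypothetical: weaker sub-cohort
historical_mean = 33.0       # hypothetical historical mean for the subject

# The combined raw mean drops purely because of the weaker sub-cohort.
combined_raw = ((1 - progressed_share) * non_progressed_mean
                + progressed_share * progressed_mean)
print(f"combined raw mean: {combined_raw:.2f}")            # ~31.26

# If the whole cohort is adjusted up to restore the historical mean...
adjustment = historical_mean - combined_raw
print(f"blanket upward adjustment: {adjustment:+.2f}")     # ~+1.74

# ...non-progressed learners end up above the historical level,
# even though the paper was (by assumption) no harder than before.
print(f"non-progressed mean after adjustment: "
      f"{non_progressed_mean + adjustment:.2f}")           # ~34.74
```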

3. The retention of raw marks when they are high

A closer look at the 26 subjects that retained their raw marks reveals another anomaly that could threaten the integrity of the standardisation process.

I mentioned earlier how, at the standardisation meeting, when the raw mark for a subject was low, DBE and/or Umalusi went back to the exam paper and identified tough questions in the paper to justify an upward adjustment. Curiously, I did not observe the same methodology being employed when the raw mark was better than the historical mean.

There was little interrogation of why the raw mark was better than last year’s, and whether this could have been because the paper was ‘too easy’. In some cases, instead of adjusting the marks downwards, the good raw mark was accepted as a welcome sign that the system is improving. Indeed, this logic was evident in your press statement released yesterday: “It is pleasing to see that the marks of subjects such as Physical Science and History could be left as unadjusted…”

A glance at the data, however, shows that there may have been good reason to adjust these marks downwards to bring them into line with the historical mean. At the very least, there needs to be an explanation (based on an assessment of the cognitive demands of the papers) as to why such a downward adjustment was not made.

Choosing the (higher) raw mark over the (lower) computer-adjusted mark occurred in the following eight cases:

| Subject | Candidates | Raw mark accepted (mean) | Adjusted mark rejected (mean) | Difference between raw & adjusted | Historical mean (2011-16) |
| --- | --- | --- | --- | --- | --- |
| Physical Science | 204 661 | 35,48 | 34,45 | +1,03 | 34,47 |
| History | 165 315 | 44,56 | 44,01 | +0,55 | 44,00 |
| English HL | 109 781 | 54,72 | 54,34 | +0,38 | 54,33 |
| Sesotho FAL | 494 | 58,31 | 57,81 | +0,50 | 57,74 |
| Dance Studies | 472 | 62,70 | 60,91 | +1,79 | 60,80 |
| Setswana FAL | 172 | 64,56 | 62,71 | +1,85 | 62,51 |
| isiXhosa SAL | 80 | 72,46 | 71,53 | +0,93 | 70,89 |
| isiZulu SAL | 14 | 79,46 | 75,49 | +3,97 | 73,01 |

In Physical Science, for example, the raw mark (mean) was 35,48. The computer recommended a score of 34,45 to bring the final mark in line with the historical mean of 34,47. However, Umalusi and DBE rejected this computer adjustment.

In all eight cases above, the computer recommended an adjustment that would bring the final mark closer to the historical mean, but this recommendation was rejected and the higher raw mark was used instead.

Why is this? And why was the computer’s recommendation of a downward adjustment rejected in these eight cases, but its recommendation of an upward adjustment accepted in the 28 cases mentioned in section 1?
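Again purely by way of illustration, the same comparison can be run across the eight cases in the table above, using the tabled figures (the calculation below is mine, not Umalusi's). In every case the accepted raw mark sits above the historical mean, while the rejected computer adjustment would have left the mean closer to it.

```python
# Accepted raw mean vs rejected computer-adjusted mean, relative to the
# historical mean (2011-16), for the eight cases in the table above.
# Figures are copied from the table; the comparison is illustrative only.
cases = {
    # subject: (raw mean accepted, adjusted mean rejected, historical mean)
    "Physical Science": (35.48, 34.45, 34.47),
    "History":          (44.56, 44.01, 44.00),
    "English HL":       (54.72, 54.34, 54.33),
    "Sesotho FAL":      (58.31, 57.81, 57.74),
    "Dance Studies":    (62.70, 60.91, 60.80),
    "Setswana FAL":     (64.56, 62.71, 62.51),
    "isiXhosa SAL":     (72.46, 71.53, 70.89),
    "isiZulu SAL":      (79.46, 75.49, 73.01),
}

for name, (raw, adjusted, historical) in cases.items():
    print(f"{name:17s} accepted raw mark is {raw - historical:+5.2f} above the "
          f"historical mean; rejected adjustment would have been "
          f"{adjusted - historical:+5.2f}")
```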

* * *

To sum up, there is reason to believe that the standardisation process may lead to an artificial inflation of the matric marks this year:

Firstly, there was an upward adjustment in 28 subjects. This is not in and of itself a problem if it can be demonstrated that these adjustments were made because the exam papers were more cognitively demanding than in previous years. However, at this stage, no such evidence has been forthcoming.

Secondly, 13,4% of the learners who wrote the National Senior Certificate were progressed learners. The inclusion of progressed learners in the standardisation process could lead to an artificial inflation of the marks of non-progressed learners, as marks are adjusted upwards to compensate for the inclusion of progressed learners in the cohort.

Thirdly, there was a tendency to retain the raw marks when the raw marks were higher than the recommended computer adjustment. If you adjust upwards when the raw mark is low, and keep the raw mark when the raw mark is high, I fear that we may end up artificially inflating the marks. This could result in a number of matriculants with passes and bachelor passes on paper, but lacking the actual cognitive skills to succeed in the real world of tertiary study and work.
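The ratchet effect described in this third point can be seen with a handful of invented numbers: if low raw means are adjusted up to the historical mean while high raw means are retained, the overall average drifts above the historical level even when the good and bad years cancel out. All figures in the sketch below are hypothetical.

```python
# Hypothetical illustration of the asymmetric rule described above:
# adjust up when the raw mean is below the historical mean, keep the raw
# mean when it is above. All numbers are invented.
historical_mean = 40.0
raw_means = [36.0, 38.0, 40.0, 42.0, 44.0]   # hypothetical subject raw means

# The asymmetric rule: never below the historical mean, never adjusted down.
final_means = [max(raw, historical_mean) for raw in raw_means]

print(f"average raw mean:   {sum(raw_means) / len(raw_means):.2f}")      # 40.00
print(f"average final mean: {sum(final_means) / len(final_means):.2f}")  # 41.20
```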

There may be good explanations for all of the issues raised above. If so, please share them so that we can satisfy ourselves that standards have been maintained, that the integrity of the 2016 matric exams is intact and that no learners have been unduly advantaged or disadvantaged in any way.

I look forward to hearing from you.

Kind regards

Gavin Davis MP

Issued by Gavin Davis MP, DA Shadow Minister of Basic Education, 30 December 2016