As a status-seeking species, humans like to know where they stand.  We take comfort in discovering that some are not doing as well as we are, while those who are doing better we explain away as anomalies.  The more access we have to data, the more fascinated we have become with statistics, and with how to manipulate them.


“Lies, damned lies and statistics”, exclaimed Disraeli1, trying to explain something he intuitively knew to be correct but which his opponents, by a mischievous use of data, were trying to deny.  Einstein was wary of statistics, saying, “Not everything that can be counted actually counts, and not everything that counts can be counted”2.  Later, demonstrating the limitations of logical thinking, he said, “When I was young, I found that the big toe always ends up making a hole in a sock.  So, I stopped wearing socks!”  In a complex society, constantly competing with others for the resources on which our livelihoods depend, it is critically important for English educationalists to know just which variables correlate with which if they are to predict the likely consequences.


We suffer from a surfeit of statistics.  Standard Assessment Tasks (SATs)3 were a late addition to the 1988 Education Act, made through Thatcher’s direct intervention into what she saw as the emerging colossus of the national curriculum.  To be taken by all pupils at the ages of seven, eleven, fourteen and sixteen, SATs were designed to give an easy analysis of pupils’ progress against a common standard.  Because the results were to be published, schools became preoccupied with cramming for the tests.  Having given the media such easy ammunition, the government then found itself entrapped by that same media into proving that its policies were working.  Desperate for the appropriate statistics, politicians then drove teachers to put pressure on the children to provide the results.


The annual announcement of ‘A’ Level and GCSE results now provides the media with a field day.  League table analysis has become a national pastime4.  With ever more sophisticated technology, newspapers are able to announce immediately which are the nation’s best, and worst, schools, and to rank the thousands of others in between.  The techniques they use are above reproach; it is what the data reflect that is questionable.  Take ‘A’ levels.  Two things have happened.  Twenty years earlier only 15% of the population took ‘A’ levels, and only 10% of these (because of regulations in the examination process) got an ‘A’ grade.  Now more than a third of pupils sit the exam, with regulations stipulating that at least 20% get a top grade; there are now six times as many ‘A’ grades as previously5.  Furthermore, in the 1980s sixteen-year-olds went into their GCSE exams in almost total ignorance of what the examiners might ask them.  Now their teachers know exactly how the examiners will mark pupils’ scripts, so it is easy to drill pupils to “hit” all the key phrases, words, ideas and facts that they know the examiners will be looking for6.  “At GCSE our children go into the exam incredibly well-prepared”, a teacher explained, “but by God are they bored by school”.
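As a rough check on that six-fold claim, the lower-bound figures quoted above can be combined to give the share of each cohort gaining an ‘A’ grade then and now; this is only a sketch, since it assumes cohorts of roughly similar size and the exact proportions behind note 5 are not given here:

\[
\underbrace{0.15 \times 0.10}_{\text{then}} = 1.5\%
\qquad
\underbrace{\tfrac{1}{3} \times 0.20}_{\text{now}} \approx 6.7\%
\qquad
\frac{6.7\%}{1.5\%} \approx 4.4
\]

Because “more than a third” and “at least 20%” are both lower bounds, the true multiple is larger than 4.4, which sits comfortably with the six-fold increase cited at note 5.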


By the late 1990s the public were becoming increasingly aware that the earlier rate of improvement in SAT scores had slowed down.  More questions were asked about the effectiveness of the national curriculum, and politicians, by now led by Tony Blair, became nervous.  They massively over-reacted, and required every primary school to introduce a National Literacy and Numeracy Strategy7.  This was to be supplementary to the national curriculum; schools would have to add a separate hour of English and of Maths every day.  With the system creaking at every joint, SAT results in those subjects started to go up again, but frequently at the cost of ignoring other subjects.  An embarrassed Ofsted report in October 2002 stated that “It is the aspects of subjects that bring them to life, enquiry, problem-solving and practical work that have suffered most, and this represents a serious narrowing of the curriculum”8.  It was precisely these aspects that had been sacrificed.  The most successful primary schools, those that had succeeded in not narrowing the curriculum, were those led by very experienced headteachers, “the majority of whom had been a long time in their current post, or had been headteachers elsewhere”9.  Stop and think about that one.  It is a wonder it was ever published.  These were headteachers who had most likely been pupils themselves in Plowden-type primary schools, and who had learnt their craft as teachers in the 1970s, at the very time when HMI had been highly critical of a basic-skills curriculum, saying (1978) that “there is no evidence… to suggest that a narrower curriculum enables children to do better in the basic skills, or led to work being more aptly chosen to suit the capacities of the children”10.


It was not just the English who had become infatuated with the analysis of data.  In 2000 the Organisation for Economic Co-operation and Development (OECD) initiated the Programme for International Student Assessment (PISA), looking at the educational achievement of fifteen-year-olds in over 40 countries at three-yearly intervals.  National governments, especially in England, took this data very seriously11.  An Irish Senator stated, “The problem with the OECD Report is that it drives us through a right-wing three-Rs measurement of education, which does not measure up (for we now require people with) tolerance, mercy, understanding, creativity, risk-taking, leadership… this results in a dull generation of leaders who are total experts in a narrow field… this is not the way to run a country”12.  When the OECD was asked to comment on why Finland did so well, it enumerated two factors that were not to be found within the statistics, namely the quality and the social standing of its teachers.  It is words that are needed for a full explanation, not just numbers.

Thesis 84:    27th August 2006