1. The project
2. Claims and disputes
3. Summary of conclusions and contentions
3.1 The inspection of specialist providers
3.2 The education and training and development of inspectors
3.3 Performance criteria
3.4 The management of the performance of inspectors
3.5 Quality control and assurance
3.6 New standard, quality control and assurance
3.7 New education and training and development programmes for inspectors
3.8 Professionalisation of inspectors and inspections
1. What did the project aim to achieve?
2 Inspection judgements
2.1 The reasons for the explorations, analyses and evaluations
2.2 Judgement criteria and evaluative statements: an explanation
3 Ofsted’s education and training and development programmes for inspectors
3.1 The concerns about the initial training programmes for inspectors
3.2 Ofsted’s education and training and development programmes for inspectors: questions
4 Continuing Professional Development (CPD) programmes for inspectors
5 The management of the quality of inspections
5.1 Codes of Practice
5.2 Contract Management
6 Summary of the aims of the project
7 The colleges and their inspections 2002 –
8 The methodological steps
8.1 Observation of inspectors
8.2 Ethnographic interviews
8.2.1 Ofsted and RISPs: refusal of interview access
8.3 Documentary sources
8.3.1 Colleges’ documents
8.3.2 Ofsted’s documents
9 The arrangements of the chapters of the book
Chapter 1: The origins of the project
1.1 Encounters with inspectors (2002)
1.2 Encounters with inspectors (2004 – 2013): teaching methodologies
1.2.1 Poor knowledge of the structures, teaching and assessments of specific qualification options within the National Qualifications Framework (NQF)
1.2.2 Interpretations of qualitative and quantitative data
1.2.3 Research evidence: the impact of social and economic factors on educational progress
1.3 The problems of Ofsted’s methodologies and objectivity
1.4 The dearth of literature
Chapter 2: Outcomes for learners: judging the achievements and progress of learners
2.1 The judgement criteria and evaluative statements
2.1.1 Vague and ambiguous judgement criteria and evaluative statements
2.1.2 Outcomes for learners: Grade Characteristics
2.2 The achievement and progress of learners
2.3 Narrowing the achievement gaps
2.4 Functional and general skills
2.5 Progression to higher qualifications and the labour market
2.6 Explorations, analyses and evaluations of outcomes for learners
2.6.1 Outcomes for learners: whose outcomes?
2.6.2 The misrepresentation and misinterpretation of statistical data and evidence
2.6.3 Interpretive shortcomings
2.6.4 The misinterpretations of judgement criteria and evaluative statements
2.6.5 The impact of population characteristics on outcomes for learners
2.6.6 Where do tables 3 – 6 stand?
2.6.7 Retention, attendance and punctuality
2.6.8 Alternative arguments and contests: evidence
2.6.9 What factors are likely to affect retention, attendance and punctuality?
2.6.10 The factors that affect retention, attendance and punctuality
2.7 Judgement criterion three: a testimony to incompetence?
2.8 Progress and progression: issues and consequences
2.9 What has Ofsted contributed to the standard of education in England?
Chapter 3: Methods in lesson observations
3.1 Lesson observations methodologies: a strategic approach or typology?
3.1.1 To prepare lesson plans or not prepare lesson plans?
3.2 Lesson observations focused on students: inspection by walking about
3.3 Collaborative lesson observations: covert observations of the observer
3.3.1 Collaborative lesson observations: covert measurements of quality and information
Chapter 4: Judging teaching, learning and assessment: summative assessments of teachers?
4.1 Lesson observations focused on teachers: the objectives of lesson observations
4.1.1 Questioning Ofsted’s definition of the objective of teaching
4.2 The judgement criteria and evaluative statements for lesson observations
4.2.1 Post-observations grading of lessons
4.3 Lesson observations: evidence collection, analyses, evaluations and interpretations
4.3.1 The psychologies and attitudes of teachers
4.3.2 The first group of criteria and statements: problems and consequences
4.4 The technical skills and competencies of teachers: developing and planning the curriculum and the learning
4.4.1 The educational achievements of teachers: subject knowledge
4.5 How well teachers support students in lessons
4.5.1 Support or pastoral care?
4.6 Evaluating teacher effectiveness: issues, implications and consequences
4.6.1 Too much attention to technicalities and too little attention to the humanness of education, teaching and learning
4.6.2 Too much confusion
4.6.3 What does Ofsted mean by ‘best practice’ and ‘weak teaching’?
4.6.4 Lesson observations grades: what do they measure?
4.6.5 Inspectors are partial to teaching methods
4.6.6 Grade Characteristics: confusing Ofsted-speak?
4.6.7 Do inspectors actually know best?
Chapter 5: Judging leadership and management
5.1 The effectiveness of leadership and management
5.1.1 The effectiveness of leadership and management: Grade Characteristics
5.2 Strategic leadership and management
5.2.1 A management consultancy?
5.2.2 What does Ofsted mean by leaders and managers?
5.2.3 Strategic leadership and management: a college bureaucracy?
5.3 Performance management
5.3.1 Performance management and CPD
5.4 The quality of provision and improvement in the quality of provision
5.4.1 Learner experience and the quality of learning experience
5.4.2 Provision: what provision?
5.5 Provision and the labour market
5.5.1 Provision and the labour market: corporate self-interest
5.6 Leadership and management: the human and legislative dimensions
5.7 Equality and Diversity
5.7.1 Equality and Diversity: curriculum development and teaching
5.7.2 The consequences of Equality and Diversity: teaching, curriculum development and
5.7.3 The consequences of Equality and Diversity: the rates of achievement
5.8 Safeguarding students
5.8.1 Safeguarding students: who is a child?
5.9 Comparative institutions: a case study
5.9.1 The case histories
5.9.2 Comparability: what were the points of comparability between the colleges?
5.9.3 The methodological criticisms demonstrated in the case
5.9.4 The inspectors judged E&D Support structures and not the practice of E&D
5.9.5 The implications and outcomes of the case
5.9.6 What kinds of conclusions can be drawn from the case?
5.10 Leadership and management: a summary?
Chapter 6: Judging the overall effectiveness of providers
6.1 The overall effectiveness of the colleges
6.2 Grade Characteristics: an objective or subjective summation?
6.2.1 Are tables 16 – 19 objective indicators of the overall effectiveness of the colleges?
6.3 The computations of ‘Contributory Grades’: what kinds of analyses do inspectors carry out in order to reach their judgements?
6.3.1 The convention of averages
6.3.2 Which judgement criteria and evaluative statements were involved in the computations of the grades for teaching, learning and assessment?
6.3.3 How was lesson observations data analysed and incorporated in the averages?
6.3.4 New inspections reports for College 3: a re-examination of table
6.4 The internal weaknesses in Ofsted’s computational methods: how were the grades for outcomes for learners and the effectiveness of leadership and management computed?
Chapter 7: The education and training and development of HMIs
7.1 The structure of the education and training and development of HMIs 1992 –
7.2 The contents and syllabuses of HMIs’ education and training and development programmes 1992 – 2013
7.2.1 Ofsted’s induction programme
7.2.2 The remit training activities for schools and colleges
7.2.3 Training in Additional areas
7.2.4 Questions arising from the list of modules
7.3 Delivering Ofsted’s education and training and developments programme
7.4 The assessment of HMIs’ learning
7.5 What were the omissions from the remit training activities for schools and colleges?
7.6 Does the absence of complaints against Ofsted mean quality inspections?
7.6.1 The reasons why colleges do not complain against Ofsted: a case example
7.6.2 The dominant discourses in Britain’s industrial ethnography
7.6.3 The weaknesses in the quality of thoughts within Ofsted
7.6.4 Should one size inspection fit all?
7.7 Inspector learning: a vocational or academic learning – or both?
7.7.1 What were the issues raised about Ofsted’s education and training and development programmes in the HMIs’ arguments?
Chapter 8: The programmes of training for Additional Inspectors
8.1 Training Additional Inspectors: course contents?
8.2 What has Ofsted said about the education and training and development of Additional Inspectors?
8.3 The accreditation and validation of Additional Inspectors’ education and training and development
Chapter 9: Continuing Professional Development
9.1 A system of continuous evaluations of Ofsted’s CPD programmes?
9.2 CPD and Ofsted’s performance management programmes
9.2.1 The importance of CPD: research evidence
9.2.2 A haphazard and fractional approach to performance management
9.3 Has Ofsted defined CPD programmes for HMIs and AIs?
9.4 Ofsted’s roles in the CPD of practicing AIs
9.5 A mandatory or voluntary CPD?
9.5.1 Open discussions: a CPD programme?
9.5.2 An individual CPD plan?
Chapter 10: The management of the practice of inspections
10.1 Education: ‘raising standards’ and ‘improving lives’
10.1.1 Contract Management: a management control?
10.1.2 Contract Management defined
10.1.3 Britain’s industrial ethnography revisited
10.3 Signing Off
10.3.1 Remit specific Signing Off
10.3.2 Inspection reports Signing Off
10.3.3 Who Signs Off AIs?
10.4 Codes of conduct
10.5 The quality assurance of inspections: quality assurance visits
10.5.1 The focus of the quality assurance visits
10.5.2 The duration of the visits
10.5.3 The criteria for selecting samples of inspection for quality assurance visits
10.5.4 The systems for reporting the outcomes of quality assurance visits
Chapter 11: Is Ofsted fit for purpose?
11.1 Ofsted’s operational incapabilities revisited
11.2 Ofsted is a functional organisation
11.3 The implications of functionalism for Ofsted
11.3.1 Ofsted’s frameworks are operationally inorganic
11.3.2 Ofsted’s philosophies are inward looking
11.3.3 Ofsted’s culture is a hierarchical bureaucracy
11.4 The impact of functionalism on Ofsted
11.4.1 The impact of functionalism on Ofsted’s attitudes towards teachers
11.4.2 The impact of functionalism on Ofsted’s education and training and development
11.4.3 The impact of functionalism on the quality of inspectors and inspections
11.5 Functionalism and Ofsted’s training methodologies
Chapter 12: Improving the standards of inspectors’ performances
12.1 A new approach to standard and quality
12.2 Quality control and management
12.2.1 Quality management system
12.2.2 Colleges, teachers and students are Ofsted’s customers and not the DES
12.2.3 The benefits of ISO
12.3 Summary of the main arguments
Chapter 13: Educating, training and developing inspectors to judge outcomes for learners
13.1 Professionally qualified inspectors
13.2 The CPD programmes: contents and syllabuses for judging outcomes for learners
13.2.1 Documentary and statistical analyses
13.2.1.1 Replying to critics
13.2.1.2 An appreciation of statistical analysis: performance criteria
13.2.2 Ethnographic interview skills and competencies
13.2.3 Developing and framing ethnographic interview questions
13.3 Developing ethnographic interview questions: a case example
13.4 Making judgements on outcomes for learners: Performance Criteria and Elements
13.4.1 Assessment: performance evidence
Chapter 14: Educating and training and developing inspectors to judge teaching, learning and assessment
14.1 Lesson observations
14.2 Anticipating critics and criticisms
14.2.1 Teaching: Science or Art?
14.2.2 Teachers deploy and use dual professional identities in lessons
14.2.3 Teachers think on their feet, theorise and reorder the curricular during lessons
14.2.4 A CPD in lesson observations?
14.3 The CPD programmes for judging teaching, learning and assessment
14.3.1 The contents and syllabuses of the CPD programmes on how to judge teaching, learning and assessment
14.3.2 Making judgements on teaching, learning and assessment: Performance Criteria and Elements
14.3.3 Assessment: performance evidence
Chapter 15: Educating and training and developing inspectors to judge leadership and management
15.1 Interpreting the meanings of ‘financial resources’ within the 1992 Act
15.2 The contents and syllabuses of the CPD programmes on leadership and management
15.2.1 The contents and syllabus of the CPD programmes on financial management
15.2.2 Assessment: performance evidence on financial management
15.2.3 Human Resource Management
15.2.4 The contents and syllabuses of the CPD programmes on human resource management
15.2.5 Assessment: performance evidence on human resource management
15.3 Leadership and Management
15.3.1 The contents and syllabuses for the CPD programmes on leadership and management
15.3.2 Assessment: performance evidence on leadership and management
15.4 The management of physical resources and Sustainability
15.4.1 Ofsted’s failings with respect to the management of physical resources and Sustainability
15.4.2 The contents and syllabuses of the CPD programmes on the management of physical resources and Sustainability
15.4.3 Assessment: performance evidence on physical resources and Sustainability
15.5 The management of technological resources
15.5.1 The objectives of CPD in the management of technological resources
15.5.2 The limitations of technological applications
15.5.3 The contents and syllabuses of the CPD programmes on the management of technological resources
15.5.4 Assessment: performance evidence on the technological resources
15.6 Making judgements on the effectiveness of leadership and management
15.6.1 Eliminating redundant evaluative statements
15.6.2 The contents and syllabuses of the CPD programmes on making judgements on the effectiveness of leadership and management: Performance Criteria and Elements
15.6.3 Assessment: performance evidence on making judgements on the effectiveness of leadership and management
Chapter 16: The inspection of education in England: a professional occupation?
16.1 Is there a need to professionalise inspection?
16.2 An Accredited Body for a professionalised inspection service?
16.3 The Accredited Body: a reviewer of Contract Management
16.4 The Accredited Body: Administrator of Ofsted’s Codes of Conduct
This book reports on a research project which was carried out in two London colleges between 2002 and 2013. The focus of the project was the Office for Standards in Education, Children’s Services and Skills (Ofsted) and its education and training and development programmes for inspectors.
The Office for Standards in Education, Children’s Services and Skills (Ofsted) defines itself as the statutory watchdog for the preservation and management of the standard of education in England. By its own account, Ofsted has claimed, firstly, that it is the promoter and upholder of high standards of educational achievement; secondly, that it is the bulwark against ‘weak teaching’ and weak leadership, management and governance; and, thirdly, that it is the champion and protector of the interests of the constituents of education in England (Ofsted 2012: 4).
This research does not dispute the fact that the above claims came with the provisions of the Education (Schools) Act 1992 and the Education and Inspections Act 2006, and that these Acts delegated to Ofsted the statutory duties to inspect, evaluate, judge and report on the standard of education in England. Instead, the research disputes Ofsted’s claims as follows: firstly, it disputes the extent to which Ofsted has achieved the statutory duties delegated to it under the provisions of the 1992 and 2006 Acts. And, secondly, it questions whether, in its current structure, Ofsted is fit for purpose and whether Ofsted and a significant proportion of its inspectors have the operational and intellectual capabilities to continue to inspect specific educational remits, and to evaluate, judge and report on the standard of education in England.
Thus the research has advanced the following arguments against Ofsted’s and its inspectors’ capabilities and fitness for purpose: the first argument is that there are incompetent inspectors within the ranks of Ofsted inspectors. The research has found that 30% of practicing Ofsted inspectors do not have the skills and competencies required to successfully achieve Ofsted’s performance criteria for inspectors, particularly in the collection, analysis, evaluation, assessment and interpretation of evidence in the three principal aspects of Ofsted’s judgements. The three aspects in question are ‘Outcomes for Learners’, ‘Teaching, Learning and Assessment’ and ‘Leadership and Management’.
The second argument is that there are significant gaps in Ofsted’s management of the standards of the quality of education in England. The gaps have arisen not only because of the population of unskilled and incompetent inspectors, but also because Ofsted’s inorganic organisational culture has prevented it, firstly, from developing the intellectual and organic operational capabilities it would have needed to design and implement effective quality assurance and control systems to manage the skills and competencies of its inspectors. Secondly, its inorganic organisational culture has prevented the development of organic Continuing Professional Development (CPD) programmes whose contents and syllabuses should have been linked to the management of the performance of inspectors and which should have been capable of adapting to the dynamics of national education discourses. And, thirdly, Ofsted’s inorganic culture has prevented it from developing and implementing management systems to continuously evaluate its CPD programmes in order to ensure that the programmes are continuously organic, continuously fit for purpose, and in concert with research in education, teaching and learning.
The third argument is that Ofsted stands alone among national organisations in its ambivalence to the relationships between the quality of inspectors as human resource operatives and the quality of the outcomes of the inspections they conduct on its behalf. Additionally, it stands alone in its failure to appreciate the fact that the quality of its inspectors, and hence the quality of their performances, is a function of the quality of its CPD programmes. This means, firstly, that its education and training and development programmes have not changed since 1992 and are therefore inadequate preparation for the skills and competencies its inspectors would need to have in ‘cooperative cross-cultural dialogues’ (Igbino 2012: 170) if they are to be able to negotiate the cultural mosaics within the colleges in contemporary urban England. And, secondly, the programmes are too narrowly focused on training inspectors in how to operate the statutory ‘Framework for Inspection’. Again, this means that the programmes do not sufficiently prepare and equip inspectors with the skills and competencies they would need in order to understand and manage the wider issues and factors, including economic, social and cultural changes, which have consequences for education, teaching, learning and the rates of achievement in contemporary England.
The fourth argument is that Ofsted’s performance criteria for inspectors are too watery and wholly inadequate in that, firstly, they do not demand the essential skills and competencies which should enable Ofsted to assess the employability of newly recruited inspectors and the continuing employability of practicing inspectors. This is because the criteria do not demand that inspectors must demonstrate how they would comprehend and negotiate the multi-layered transactions which occur between teachers, students, parents and guardians and the environment in which colleges operate. Secondly, the criteria are too ambiguous. They do not give sufficient attention to the collection, analysis, assessment and interpretation of evidence, and they do not require inspectors to demonstrate the abilities to assess, interpret and distinguish between different types and classes of evidence. And, thirdly, the criteria do not require inspectors to demonstrate how they would inspect, interpret and judge the impact and consequences of students’ economic, social and cultural backgrounds for education, teaching and learning, and for the rates of educational achievement and hence for ‘Outcomes for Learners’.
Thus the research questions whether there is a need for Ofsted. More importantly, the research argues that, on the basis of the evidence of Ofsted’s deficiencies uncovered during the research, together with the above arguments, the bases exist to conclude that Ofsted is not, at this point in time, in a fit state to discharge its statutory responsibilities effectively.
Underpinning the above conclusion is the documentary evidence displayed in table 24. On the basis of the contents of the table, the research argues that Ofsted’s remit has become too unwieldy. The remit which Ofsted now inspects extends beyond the traditional educational institutions: its inspectors now inspect education and training and development programmes in specialist institutions and organisations, including Armed Forces training.
Accordingly the research has contended as follows:
- That the inspection of the standard of education and training and development offered by the specialist institutions and organisations in table 24 should be delivered by designated specialist organisations. These institutions should be removed from Ofsted’s inspection circuits so that it can focus on the inspection of the maintained schools it was created to inspect
- That Ofsted should be broken into smaller and more organic units and the smaller organic units should be independent entities, which are able to respond to developments in research in education, teaching and learning
- That there should be more recognisable and properly defined boundaries of the new entities in order to insulate and remove them from the clutches of the Secretary of State for Education and the briefers at the Department for Education and Science.
- As Ofsted’s education and training and development programmes currently stand, they are inadequate preparation for the skills and competencies its inspectors would need to have in ‘intercultural dialogues’ in order to be able to negotiate the ‘cultural mosaics’ of the communities from which the colleges in this research project drew their constituents. The current programmes could not even enable inspectors, irrespective of the educational remit involved, to begin to understand the surfaces of these dialogues, let alone enable them to penetrate their deep recesses in order to appreciate their meanings and the implications of those meanings for the ways these colleges worked
- The S5 methodological training programme for HMIs is not good enough. The programme is poorly focused and ill-defined. The contents, syllabuses, performance criteria and the assessment methods for those criteria are not defined. The programme does not sufficiently address the analytical and evaluative skills and competencies which should have enabled HMIs to comprehend the dynamics of the standard of education in contemporary England to any measurable depth and breadth
- Additionally the programme is too narrowly focused on training inspectors on how to operate the statutory ‘Framework for Inspection’. Therefore the depth and breadth of the programme are not sufficient to prepare and continuously develop the basic skills and competencies which HMIs would need in order to understand and manage the wider issues and factors, including economic, social and cultural changes, which have consequences for education, teaching, learning and for the rates of educational achievement in England.
- The current performance criteria for inspectors are inadequate. They are inadequate because they do not demand the essential skills and competencies that should enable Ofsted to assess the employability of recruits and the continuing employability of practicing inspectors. Thus as they currently stand the performance criteria do not demand that inspectors must demonstrate the skills and competencies that should enable them to interrogate the multi-layered transactions which occur between teachers, students, parents and the environment in which the colleges operate
- The performance criteria are too ambiguous. They do not give sufficient attention to the collection and assessment of evidence and they do not require inspectors to demonstrate the abilities to assess and distinguish between different types of evidence. This is a very serious defect because inspectors must be able to gather and assess and interpret both qualitative and quantitative evidence in order to be able to judge ‘Outcomes for Learners, Teaching, Learning and Assessment’ and ‘Leadership and Management’
- The performance criteria do not require inspectors to demonstrate the skills and competencies needed to negotiate the impact and the consequences of the economic, social and cross-cultural developments in contemporary England for education, teaching and learning
- The performance criteria do not require inspectors to demonstrate the abilities to analyse and evaluate and interpret the consequences of these developments for the rates of educational achievements and hence ‘Outcomes for Learners’.
- The management of the performances of inspectors is poor and lacks focus. The objectives are unclear and the processes are totally divorced from the established performance criteria. This is because there is no evidence of any analysis and evaluation of the job of inspectors, of descriptions of its components, or of any evaluation of the strengths and weaknesses of the contribution the job makes to the effectiveness of Ofsted as an organisation and, indeed, to its inspection activities
- The system for managing inspectors’ performance lacks direction. It lacks defined and recognisable processes for accountability and responsibility for remedial action in cases where inspectors have failed to meet established performance criteria
- The roles of the Inspection Delivery Directorate, the Protect-Departmental, the Inspector Performance Management Administration (IPMA) and the Performance and Development Planning (PaDP) in performance management and in the conduct and delivery of inspection services are unclear
- There are no systems which have been developed to ensure that the results of performance management, particularly the performance data collected and disseminated by the IPMA, are used to inform the contents and syllabuses of Ofsted’s Continuing Professional Development programmes.
- There is no robust system designed to manage, control and monitor the performance of the Regional Inspection Service Providers (RISPs). The current system is inadequate and wholly worthless because it relies entirely on market-based activation and compliance with ‘Contract Management’ by the RISPs
- There is no clarity to how underperforming and underachieving RISPs and AIs are managed by Ofsted and the roles of ‘Contract Management’ in this respect are unclear
- There is no clarity to how ‘Contract Management’ is administered. And there is no clarity to how ‘Contract Management’ covers underperforming and underachieving RISPs and AIs
- There are no monitoring, feedback and action plans designed to interrogate the workings of ‘Contract Management’ at critical points. And the associated systems of ‘Badging’ and ‘Signing-off’ which are used to control, manage and monitor the quality, skills and competencies of Additional Inspectors (AIs) are inadequate because they are entirely under the control of HMIs and RISPs and lack Ofsted’s oversight. The entire process is a closed-loop system. And such a system provides opportunities for cronyism, corruption and the falsification of performance evidence
- There is no evidence that Ofsted has established the procedural steps for the administration of its own codes of conduct. There is no clarity about the allocation of responsibility and accountability for enforcing the codes of conduct
- The management structures for assuring the quality of inspections are not strong enough. The systems for controlling and assuring the quality of inspections are focused on fault finding during quality assurance visits rather than fault prevention. The systems as they stand are not able to prevent poor quality performance by inspectors. They do not have the capacity to control and assure the quality of inspections and the quality of inspection judgements before these judgements are made.
- If Ofsted is to continue to play any role in the judgement of the standard of education in England, then new standards of quality control and assurance systems should be established for it. The systems should be evaluative, rather than restrictive and they must be designed to achieve the following:
- The systems should continuously assess the extent to which all aspects of Ofsted’s educational, training and developmental programmes are fit for purpose. This means that the systems must continuously equip inspectors to be in a state of readiness to respond to the dynamics of the developments in research in education
- The systems should define a set of standards which is unambiguously stated and which is understood by all employees, HMIs, AIs and RISPs alike, to represent the focus and the goal which all activities, including inspections and educational, training and development programmes, must achieve. The standards should exist for all concerned and should be recognised as the level below which those performances and programmes must never be allowed to fall, whether the programmes are administered in-house or otherwise, and whether they are programmes for HMIs or AIs
- The systems should continuously and proactively interrogate and monitor developments before the start, and not after the start or during quality assurance visits to inspections in progress or to education, training and development programmes in progress. This would ensure that everyone concerned and involved in the achievement of Ofsted’s statutory duties is working towards the achievement of the defined standards
- The systems should be designed to identify and correct sub-standard performance, ensure that poor performances are flagged and that measures are triggered to prevent potential sub-optimal performances before they occur
- The new systems of standards and quality should embed a quality management system. The quality management system should be designed to oversee, verify and check and certify that Ofsted’s quality standards and quality subsystems are achieving the specified minimum standards
- Responsibilities and accountabilities under the management system should be made unambiguously clear.
- Inspectors should be trained in how to collect evidence, including how to frame interview questions, conduct interviews and carry out ethnographic reading and analyses of documents, so that evidence is closely focused on standards in education, which to all intents and purposes are the objectives of inspection. Additionally Inspectors should be trained so that they know that there is a correlative relationship between interview questions and the validity, reliability and the value of evidence
- Inspectors should be trained to know that the validity of the judgement decisions they would subsequently derive from their analyses and evaluations and interpretations of the interview questions in table 25 depends upon selecting and interviewing the right respondents[i]
- There should be close cooperation with accredited awarding bodies to design courses, contents, syllabuses, performance criteria, learning objectives and outcomes and assessment methodologies covering not only the entire remits which I have set out in chapters 13 – 15, but also the wider skills and competencies which inspectors should have in order to know and understand the consequences of economic, social and cultural change for education in England
- One size of inspection should no longer fit all. Therefore one-size education and training and development programmes and CPD programmes on ‘Outcomes for Learners’, ‘Teaching, Learning and Assessment’ and ‘Leadership and Management’ should not fit all. Accordingly the qualifications, contents, syllabuses, performance criteria, learning objectives and outcomes and assessment methodologies I have discussed in chapters 13 – 15 are adaptable and broad enough to escape the confines of the statutory ‘Framework for Inspection’, and they should embed and reflect the ethos and real lives of the colleges and their students and the historiographies of the communities in which the colleges reside
- Anyone wishing to become an Ofsted inspector must, in order to practice in any of the remits or combinations of remits in table 24, be required to submit for moderation and assessment by independent Assessors portfolios of evidence demonstrating how they have met the relevant performance criteria for their remits.
- That the DES should consult its constituents including parents, teachers, schools’ and colleges’ leaders, governors, students, Local Authorities and MPs of the Select Committee on Education in order to institute an Independent National Organisation with the ability to establish and monitor a register of qualified Ofsted inspectors
- The organisation should accredit and register prospective inspectors who have met specified professional standards in accordance with the wider skills and competencies and the respective remits in table 24
- The organisation should establish and accredit rigorous and compulsory programmes of CPD to enable practicing inspectors to keep abreast of educational research and research methodologies, of research and developments in teaching and assessment methodologies, and of research and developments in resource management
- The problems posed to the quality of inspection by unskilled, incompetent, underperforming and underachieving inspectors should come under the purview of the organisation. It should deal with incidents of poor skill, incompetence, underperformance and underachievement by requiring that the condition for continuing registration to practice as inspectors, be they HMIs or AIs, within any of the remits in table 24 should be the successful completion of prescribed core programmes of CPD in the areas I set out in chapters 13 - 15
- The organisation should have the authority to require practicing inspectors to demonstrate competence
- The organisation should define and establish professional codes of conduct for practicing inspectors
- The organisation should maintain a register of inspectors and should, firstly, establish disciplinary structures for breaches of the codes of conduct and, secondly, oversee the disciplinary procedures designed to deal with misconduct and poor professional practice by HMIs and AIs
- The organisation should have the power to sanction inspectors and indeed suspend their license to practice or strike them off the register entirely
- Membership of the organisation should be compulsory for inspectors, irrespective of the remit in table 24 in which they practice
- The organisation should be funded and supported by subscriptions from inspectors and the Regional Inspection Service Providers.
The Office for Standards in Education (Ofsted) came into existence under the provisions of the Education (Schools) Act (1992). The Act, which came into force during the Premiership of John Major, made provision for the post of Her Majesty’s Chief Inspector (HMCI). The Act directed that the HMCI should be the Chief Officer of Ofsted and that the incumbent HMCI would be responsible for securing the inspection of schools in England. The Act stipulated that, on the basis of the results of the inspection of schools, the HMCI must report to the Secretary of State for Education on the standards of educational achievement attained by school pupils in England, on the quality of education provided by schools, on the management of financial resources in schools, and on the moral, social and cultural development of pupils in schools in England (Education Committee 1992: 1).
Thus the Education (Schools) Act (1992) made the management of the standards of education in England the primary and statutory duty of Ofsted. As part of its responses to the responsibilities given to it by the Act, Ofsted recruits and trains HMIs and contracts organisations to train and supply Additional Inspectors (AIs) to carry out the inspection of schools and colleges on its behalf.
But who exactly are Ofsted inspectors? What exactly do they inspect? What kinds of analyses and evaluation and interpretations do they do in order to arrive at their judgements about the standard of the education provided for students by colleges in England? What kinds of initial and continuing education and training and development programmes do inspectors undergo in order to enable and prepare them for the performance of their roles as inspectors? How exactly does Ofsted manage the performance of inspectors? How does Ofsted manage the quality of inspection and the quality of the judgements of its inspectors?
Accordingly, the aims of this research project are to explore some of the answers to the above questions. In particular the project will focus its explorations on the following areas:
1. Inspection Judgements
2. Ofsted’s education and training programmes for inspectors
3. Continuing Professional Development of inspectors
4. The management of the quality of inspection
When inspectors have concluded inspections they write and publish reports of their findings. In the reports they make three post-inspection judgements, in which they rate schools, colleges and Education Departments at universities and, indeed, any of the remits in table 24. They award grades on a scale of 1 – 4 for each of the three aspects of judgement. On the scale, grade 1 is the highest, which means that the remits were ‘Outstanding’ and that they were delivering ‘Outstanding’ standards of education to their constituents; grade 4 is the lowest, which means that the remits were ‘Inadequate’ and that they were delivering ‘Inadequate’ standards of education to their constituents.
The three aspects on which Ofsted inspectors judge schools, colleges and Education Departments at universities using the above scales are as follows:
- Outcomes for Learners
- The Quality of Teaching, Learning and Assessment
- The Effectiveness of Leadership and Management
On the basis of the scores on each of the above aspects of judgements inspectors would compute their overall assessments of the schools, colleges and Education Departments at universities in order to reach their verdicts on the ‘Overall Effectiveness of the Provider’ they have inspected (Ofsted 2012: 62).
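To make the grading structure described above concrete, the relationship between the three aspect grades and the overall verdict can be sketched in a few lines of Python. This is purely an illustrative model: the aspect names and the 1 – 4 scale come from the text above, but the aggregation rule (the weakest aspect caps the overall judgement) and the labels for grades 2 and 3 are my assumptions for illustration, not Ofsted’s published computation.

```python
# Illustrative sketch only: Ofsted does not publish a single formula for
# deriving 'Overall Effectiveness' from the three aspect grades, so the
# aggregation rule below (worst grade caps the verdict) is an assumption.
ASPECTS = (
    "Outcomes for Learners",
    "The Quality of Teaching, Learning and Assessment",
    "The Effectiveness of Leadership and Management",
)

# Grade 1 is the highest on the scale and grade 4 the lowest; the labels
# for grades 2 and 3 are assumed, not quoted from this document.
LABELS = {1: "Outstanding", 2: "Good", 3: "Requires Improvement", 4: "Inadequate"}

def overall_effectiveness(grades: dict) -> str:
    """Return an illustrative overall verdict from the three aspect grades (1-4)."""
    if set(grades) != set(ASPECTS):
        raise ValueError("a grade is required for each of the three aspects")
    worst = max(grades.values())  # numerically highest grade = lowest standard
    return LABELS[worst]

verdict = overall_effectiveness({
    "Outcomes for Learners": 2,
    "The Quality of Teaching, Learning and Assessment": 2,
    "The Effectiveness of Leadership and Management": 3,
})
print(verdict)  # -> Requires Improvement
```

The point of the sketch is simply that each aspect is graded on the same 1 – 4 scale and the three grades are then combined into one verdict on the ‘Overall Effectiveness of the Provider’.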
Accordingly, one of the main aims of the study is the exploration, analysis and evaluation of the data collection methodologies which inspectors use to collect the data they subsequently analyse, evaluate and interpret in order to award one or a combination of grades on the above scale and hence make each of the above judgements. Second, the study will focus on the explorations, analyses, evaluations and interpretations which inspectors subsequently perform on the data in order to enable them to form the above judgements. Thus the sum of the exploratory, analytical, evaluative and interpretive efforts will focus on the judgement criteria and the associated evaluative statements.
The reasons for the explorations, analyses and evaluations of the above aspects of judgements are as follows. The first reason is that through them it becomes possible to determine the skills and competencies which inspectors should and must have in order to collect, analyse, evaluate and interpret inspection evidence and hence make each of the above judgements. The second reason is to examine the extent to which the contents and syllabuses of Ofsted’s education and training and development programmes equipped inspectors with the skills and competencies they should and must have in order to assess and interpret evidence and hence carry out each of the three aspects of judgements above.
Before I proceed beyond this point I want to explain the meanings of Ofsted’s judgement criteria and evaluative statements because they are central to everything Ofsted has ever thought, said, done and would subsequently do.
The simplest way to explain the relationship between the judgement criteria and the evaluative statements is that they are like a river and its tributaries: neither the river nor the tributaries can amount to anything much without the other. Thus, first, the judgement criteria refer to the sets of pre-defined performance criteria for the colleges. These pre-defined criteria spell out the standards to be applied in the measurement of students’ achievement of the primary learning goals. During inspections Ofsted expects its inspectors to unearth, in the colleges, evidence of performances which correlate with the factors it has specified within the judgement criteria, in order that they are able to decide on the final grades to be awarded with respect to the three aspects of judgement.
Second, the evaluative statements are the sets of reasoning underpinning each judgement criterion of the three aspects of judgements. This means that the evaluative statements correspond to the evidence which inspectors must also unearth and which correlates with the factors specified within each of the judgement criteria, in order to prove that the colleges have, indeed, achieved the performance indicators. In other words: the judgement criteria are the events, and the evaluative statements are the types of proofs and occurrences, and the frequencies and distribution of those proofs and occurrences, which are required to prove and substantiate the events.
The second focus of the project is on the education and training and development programmes which Ofsted provides for its inspectors. By education and training and development I am referring to the initial programmes designed and taught to new HMIs by Ofsted or by its agents in order to enable them to use the judgement criteria and evaluative statements to make inspection judgements based on the above scales.
There had been doubts about the quality of the education and training and development of inspectors in the past. Thus some of my doubts about the quality, skills and competencies of Ofsted inspectors which subsequently occasioned this project were not new. Even before Ofsted carried out its first inspection of schools in England in 1993, concerns were already being expressed about the quality of the initial education and training and development of inspectors and about the impact of poor-quality education and training and development on the skills and competencies of inspectors, and hence on the quality and standard of inspection outcomes.
The concerns about the quality of the education and training and development of inspectors were raised by the MPs of the Education Committee in 1992 when they summoned the inaugural HMCI, Professor Sutherland, to give evidence on the extent to which he thought Ofsted was prepared and equipped to carry out the inspection of schools better than the inspection system it had replaced. The questions the MPs put to the HMCI concerned the skills, competencies and experiences of the inspectors who were being recruited and trained by Ofsted in time for the start of the inspection of secondary schools in 1993 (Education Committee 1992: 4). The MPs wanted the HMCI to explain why he thought the training of inspectors, including the programme of instruction and assessment, effectively prepared his inspectors for the tasks of inspecting schools. More crucially the MPs wanted the HMCI to explain the criteria for the assessment of successful outcomes for trainee inspectors: ‘would there be an examination at the end of the training’, they asked.
In his evidence the HMCI described a two-stage training programme at the conclusion of which, he claimed, successful candidates could become fully registered inspectors. According to Professor Sutherland, the first stage of the training, which I assumed corresponded to a theoretical stage, involved a five-day residential course. The HMCI did not give any indication of the contents and syllabuses that were taught on the course. He also did not give any indication of the performance criteria, methods of assessment and the desired learning outcomes. Nor did he give any details about the qualifications, skills, competencies and expertise of the lecturers, tutors or trainers of the inspectors.
The second stage of the training process was, according to Professor Sutherland, the In-Service Training. Professor Sutherland claimed that during this stage trainee inspectors participated in, and were observed as, members of a team of inspectors in actual inspections of schools. Again the HMCI gave no indication of who the observers were. Similarly, the HMCI did not give any clues as to what the observers were to observe trainee inspectors doing. And he did not state the practical skills and competencies that were to be developed and demonstrated during the observation. Nor did he state whether the assessment was criterion-referenced and what the performance criteria were (Education Committee 1992: 5).
Thus the questions are as follows. Are the two-stage education and training and development processes described by Professor Sutherland still in use today, more than two decades after he made his submission? The second question is whether changes and improvements have been made to the programmes in order to reflect the developments that have occurred in education over the period. If there have been changes and improvements, what were they? The third question is to what extent these changes and improvements have led to increases in the effectiveness and quality of Ofsted’s performances as a result of the quality of the education and training and development of its inspectors. The fourth question is to what extent the changed and improved education and training and development programmes have prepared inspectors for the educational challenges of the 21st Century. The fifth question is whether Her Majesty’s Inspectors (HMIs) and Additional Inspectors (AIs) undergo similar education and training and development programmes.
Accordingly, the goals which the research aimed to achieve with regard to the education and training and development of inspectors were twofold. The first goal was to explore, describe, analyse and evaluate Ofsted’s initial education and training and development programmes. The second was to explore, analyse and evaluate the extent to which the programmes prepared and equipped inspectors with the skills and competencies they would need in order to be able to make the judgements set out earlier in paragraph 1. Thus the research will be focusing on the exploration, analysis and evaluation of the quality and standard of the education and training and development programmes.
Specifically, the research will be focused on the following areas:
Course contents: what do inspectors actually learn during their education and training and development? These contents include syllabuses, unit or module specifications and elements of instruction, performance criteria and learning outcomes.
Accreditation: are the course contents and syllabuses nationally accredited? Who accredits the course contents and syllabuses? How often are the contents of the course revised, reviewed and reaccredited? Who are the Industry Lead Bodies?
Delivery of learning: how are the education and training and development programmes delivered? Where is the training delivered? What is the duration of the programmes? Who delivers the programmes? What are the educational attainments required by trainers, lecturers and tutors who deliver the programmes? What skills, competencies and experiences must be demonstrated by trainers, lecturers and tutors who deliver the programmes? What systems have been designed and used to monitor, control and assure the quality of delivery and learning outcomes?
Assessment: how is learning assessed and what are the assessment methods? Who conducts the assessment? What are the grading criteria? Are assessment decisions standardised and externally verified? Who carries out the verification? What systems have been designed and used to monitor, control and assure the quality of assessment? What are the minimum levels of achievement required before trainees are deemed to be qualified inspectors? How do the minimum levels of achievement fit into the National Qualifications Framework? What roles does Ofsted play in the monitoring, management and quality of the education and training and development of inspectors?
The focus of the third aim of the research was on the exploration, analysis and evaluation of Ofsted’s Continuing Professional Development (CPD) programmes for inspectors. In particular the research will focus on the following questions:
Are inspectors required to demonstrate that they have undertaken Continuing Professional Developments during a defined period of time? Are such Continuing Professional Developments mandatory or voluntary?
Has Ofsted specified the kinds of CPD programmes which HMIs and Additional Inspectors (AIs) must undertake? If so, what are they?
Are such Continuing Professional Developments validated and monitored by Ofsted or by another learned professional organisation?
What roles does Ofsted play in the Continuing Professional Development of practicing inspectors?
Does Ofsted have programmes designed to continuously evaluate its CPD programmes in order to ensure that they remain fit for purpose?
The focus of the fourth aim of the project was the exploration, analysis and evaluation of the management of the practice of inspection by focusing on finding the answers to the following questions:
What are the roles of the Regional Inspection Service Providers (RISPs) in the inspection of colleges?
How does Ofsted monitor, manage and measure the performances of Inspectors during and after inspection of colleges in order to ensure that inspectors are achieving the established performance criteria with effectiveness, quality and excellence?
Are there Professional Codes of Practice devised and overseen by recognised National Associations or Organisations to which practicing inspectors must subscribe and uphold?
What disciplinary procedures and sanctions has Ofsted formulated to deal with misconduct and poor professional practice by HMIs, AIs and the Inspection Service Providers?
What organisational control and management systems has Ofsted instituted in order to enable it to monitor the achievement of the obligations it has assigned to the Regional Inspection Service Providers (RISPs), and hence ensure that during the inspection of the colleges inspectors contracted to the RISPs upheld the codes of conduct it has set out in the ‘Framework for Inspection’?
The sum of the aims of the research project is as follows:
First, the project will focus on the exploration, analysis and evaluation of Ofsted’s judgements. These explorations, analyses and evaluations will involve the examination of the roles of inspectors in the field, particularly the data they are called upon to collect, analyse, evaluate and interpret and the judgements they are then called upon to make on the basis of their analyses, evaluations and interpretations of inspection data.
Second, the project will explore, analyse and evaluate Ofsted’s education and training programmes. The explorations, analyses and evaluations will involve the examination of the structure, contents, syllabuses, delivery and assessment of Ofsted’s education and training and development programmes and the extent to which the contents and syllabuses of those programmes equipped and prepared inspectors for their statutory roles. The rationale for the explorations, analyses and evaluations is to assess the extent to which the programmes prepare inspectors for their roles in the field.
Third, the project will carry out critical examinations of the extent to which the programmes enabled inspectors to understand the interplay between economic, cultural and social characteristics of the constituents of the colleges and academic achievements and progress.
Fourth, the project will describe the management of quality. This will involve the explorations, analyses and evaluations and interpretations of Ofsted’s quality control and assurance systems.
Fifth, the project will draw attention to the failure or success of Ofsted. This will involve considerations of how Ofsted has evolved and developed operational capabilities to cope with environmental dynamics and whether or not Ofsted is fit for purpose.
Sixth, the project will define the measures and steps Ofsted should evolve, develop and implement in order to bring itself into the 21st Century.
This project was carried out as one part of the main research project I was engaged in during the period 2002 to 2013. The main research project was carried out in five colleges in the London Area. However, the results which will be described on these pages will use data from two of the colleges. But I should point out from the start that, where it becomes important to the aims I have set out above, I will not only draw on data from the other three colleges but also on the published essays and theses based on the main research project.
Over the duration of the project there were eighteen Ofsted inspections of the two colleges. The inspections included full inspections, reinspections of departments that were judged and awarded Grade 4 during the previous full inspections, and monitoring visits in which the colleges were subjected to progress reviews. Thus between 2002 and 2013 the colleges were inspected as follows:
College one: Adult General FE
Number of students: 10,000 plus
Number of campuses: three
illustration not visible in this excerpt
Table 1: the inspection of a London FE College 2002 – 2012
College two: Sixth Form
Number of students 1200 – 1500
Number of campuses: 1
illustration not visible in this excerpt
Table 2: the inspection of a London College 2004 - 2013
Thus, as can be seen from tables 1 and 2, between 2002 and 2013 the two colleges were inspected, reinspected and paid monitoring visits eighteen times, as follows: the FE College was subjected to full inspections in 2002, 2007 and 2012, one reinspection in 2004, and monitoring visits in 2006, 2008 and 2009. The Sixth Form College was subjected to full inspections in 2004, 2008, 2011 and 2013, reinspections in 2006 and 2010, and monitoring visits in 2006, 2007, 2009, 2012 and 2013.
During the period a total of fifty-six Ofsted inspectors were involved in the inspections of the two colleges. It was these inspectors who were the subjects of observations, encounters, discussions and interviews by proxy during the project. And it was as a result of the observations, encounters, discussions and interviews by proxy of this population of inspectors that it was found that 30% of the Ofsted inspectors who inspected these colleges between 2002 and 2013 did not have the skills and competencies required to meet the performance criteria defined by Ofsted for inspectors within the further education and skills remit.
This report is a culmination of more than ten years of observing Ofsted inspectors as they go about the practice of the inspection of two colleges in the London Area. As I have pointed out in the preceding section this project was an addition and an aside to the main research project on which I was engaged from the early 2000s. Accordingly the methodological steps which I am going to describe occupied the same space, interacted and ran in the same directions as the ones that were used in the main research project.
However, there were methodological steps and approaches which were specific and unique to the project. The first such approach was observation. This involved informal observations of inspectors at close quarters during the inspections of the two colleges. I used observation because I was, much of the time, present in the classrooms when the inspectors were carrying out lesson observations. Additionally, the greater part of these observations occurred in the lessons because I sometimes taught and assisted students with their work during some of these lessons. Thus parts of the research method involved participant and action research. Additionally, as a research person attached to these colleges I was a member of the colleges. I attended pre-inspection meetings, training and briefings, inspection in-progress updates and briefings, and post-inspection training, meetings and briefings by the colleges’ Quality Nominees. In other words: the methods were in part observation and in part participant and action research.
Thus in the circumstances observation was the most appropriate method to use, because inspectors do not give interviews during or after inspection. They do not give written feedback to teachers. Indeed, they do not even answer questions from the teachers whose lessons they had observed. Instead they interviewed people. I attended and was present in these interviews, and the discussions in them were invariably centred on the extent to which teachers and Support Staff were conversant with the colleges’ policies and practices, management practices and student characteristics. Thus there was nothing discussed in these interviews which I could not have learnt from interviews with teachers, managers and principals or, indeed, read in the colleges’ documents I have outlined below.
The second methodological approach was ethnographic interviews. Five rounds of interviews were planned, but the fifth round was not carried out for the reasons which I explain below.
Accordingly, the first round of interviews consisted of proxy interviews, which involved inspectors. By proxy interviews I mean that I did not conduct this part of the interviews. Instead they were interviews between inspectors and students, and they were conducted by the inspectors; I recorded the interviews as they were being conducted. As I have pointed out above, inspectors never hold discussions with individual teachers, with the exception of the senior management, particularly the Quality Nominees. Thus I used proxy interviewing by indirectly interviewing the inspectors through the analyses of the inspectors’ interviews with students.
The second round of interviews involved structured and open post-lesson-observation discussions and interviews with teachers.
The third round of interviews involved post-inspection discussions and interviews with principals, managers, heads of departments and team leaders.
The fourth round of interviews involved a cross-section of the entire student population in the colleges.
As a part of the second methodological step Ofsted and the three Regional Inspection Service Providers (RISPs) were contacted via email for access to officials for interviews. The interviews with Ofsted and RISPs would have been the final rounds of interviews and they would have completed the rounds of planned interviews.
The Regional Inspection Service Providers (RISPs) that were contacted were CfBT Education Trust in the North of England, Serco Education in the Midlands, and Tribal Education in the South. The requests for interviews were refused. The grounds cited by RISPs for their refusal were, first, that they owed Ofsted contractual obligations. They claimed that the contractual obligations they owed to Ofsted forbade them from discussing Ofsted’s information with a third party. The second ground which was cited by the RISPs was the argument that they could only participate if the requests for interviews and access to documents were routed through Ofsted. In other words, I was to route my request for interviews to Ofsted for approval. The third ground which was cited by the RISPs was that Ofsted must give them written permission to participate in the project. This means that Ofsted was responsible for the transmission of the requests for interviews and access to documents to the RISPs.
For its part Ofsted has never accepted or declined the requests for interviews. Indeed it has never replied to the request for interviews with its HMIs and the RISPs. And it has never agreed or declined permission for the RISPs to participate in the project. More importantly, Ofsted has never explained why it has refused to transmit my request for interview access to the RISPs.
The third methodological step was ‘ethnographic dialogues’ with the colleges’ and Ofsted’s documents (Igbino 2012a: 79). Thus there were two main documentary sources consulted and read during the study. And these sources were as follows: a) the colleges’ documents and b) Ofsted’s documents.
The first documentary source was the colleges’ own documents. These documents included the colleges’ research analyses, particularly their researches and analyses of the social, economic and cultural backgrounds of their students; retention, attendance and punctuality; and progression within the colleges’ levels of courses and progression to university, employment and unknown destinations. The documents included policy statements, practices, directives and policy implementation processes, including policies on attendance, punctuality, Equality and Diversity, lesson observations, exams and assessment, learning and teaching, students’ financial support, child protection, bullying and harassment, and corporate risk management documents. These documents were impersonal, contained non-confidential data and were made available to the research in compliance with the requirements of the Data Protection legislation. These documents were also made available to inspectors by the colleges as a matter of statutory requirement, and the inspection teams and I would have read and analysed the same documents. Thus, as I will show in chapter 2, it was the results of the analyses, evaluations and interpretations of these documents, together with the disaggregated statistical data they contained, which posed skills and competencies problems for a significant proportion of the members of the inspection teams which I encountered between 2002 and 2013.
The second documentary sources were Ofsted’s documents. Access to these documents involved direct contacts between the project and Ofsted via the Freedom of Information Act (2000). Thus under the FOI (2000) Ofsted made certain documents available, including some documents from the Regional Inspection Service Providers (RISPs).
Thus, given that Ofsted had already refused to cooperate, why was it necessary to contact it via the FOI for access to its documents? The answer is that the contacts with Ofsted via its documents and direct email communications were important to the project. The importance of the contacts lay, first, in the central argument of the project that 30% of the practicing Ofsted inspectors who were encountered, observed and interviewed by proxy, and whose interpretations of the colleges’ records and data I have seen, read and heard during the inspection of the colleges, were poorly skilled and incompetent to inspect within the further education and skills remit. And, second, the inspectors who were involved in this project, together with Ofsted’s documents that were read and analysed, were emanations of Ofsted. Therefore the data relating to the encounters, observations and interviews by proxy, the documentary data, and my subsequent analyses and evaluations of the data were deemed to be legitimate evidence emanating from Ofsted.
Thus the contacts with Ofsted through the FOI and its documents were important in order to complete the research cycle by establishing direct contact with Ofsted: first, to explore, analyse and evaluate how it educates, trains and develops HMIs; and, second, to explore, analyse and evaluate how Ofsted manages the performance of its inspectors in order to update their skills and competencies. Here part of the focus of the explorations, analyses and evaluations was to look for the origins of poorly skilled and incompetent inspectors within the ranks of the inspectors who were observed during the project. Accordingly, two sets of research questions (Document 1, including subsequent amendments at the behest of Ofsted), based on the aims of the project, were sent to Ofsted and to the three inspection contractors in April 2012 under the Freedom of Information Act (2000).
Ofsted subsequently gave access to some of its documents, and some documents were written purposely in answer to my questions. Additionally, it subsequently asked the RISPs for access to their Professional Qualification in Schools Inspection (PQSI) training documents. However, it has never consented to my interviewing any of its officials, staff or HMIs. Similarly, it has refused to ask the Regional Inspection Service Providers (RISPs) for interview access on my behalf, even when some of these contractors had agreed to participate in the project.
Nevertheless, the ethnographic readings, analyses and evaluations of the documents which Ofsted has supplied have enabled me to piece together some answers, particularly answers relating to inspection decisions, judgement criteria and evaluative statements. More importantly, they have provided some answers to the questions relating to the initial education and training and development of Her Majesty’s Inspectors (HMIs) and Additional Inspectors (AIs); to questions relating to the Continuing Professional Development of HMIs; to questions relating to the management of the practice of inspection; and to questions relating to the impact of the education and training and development programmes on the quality of Ofsted inspections. A cross-section of the email transactions between this research, Ofsted, the RISPs and the Information Commissioner is in Document 2 (communications).
Every story has beginnings. The beginnings of a story could be, first, the result of the experience one has gained from exposure to trivial occurrences. Second, they could be the result of the experiences one has gained from exposure to life-changing occurrences. And, third, they could be the result of experiences gained from occurrences that fall between the trivial and the life-changing. So it was with the beginnings of the story of this project: it was neither trivial nor life-changing. And it began in 2002. Thus in chapter 1 I will discuss the story of how the research project began.
Ofsted’s approach to measuring the standards of education in England is through periodic inspections, the so-called Section 5 (S5) inspections. Thus at the conclusion of each of the inspections in tables 1 and 2 the inspectors made three types of judgement. The first type focused attention on the standards of achievement attained by the colleges’ students, given their previous educational histories. The standards of achievement attained by students were in turn derived from the disaggregated statistics of the results of public examinations; the results of continuous assessments; progression; retention; Equality and Diversity; and value-added data. Thus the overall focus of the first type of judgement is ‘Outcomes for Learners’.
Accordingly, the objectives of chapter 2 are the explorations, analyses and evaluations of the judgement criteria and their associated evaluative statements on which inspectors would subsequently base their judgements of the extent to which, in their opinion, students had achieved their primary learning outcomes.
The explorations, analyses and evaluations will include criticisms of the judgement criteria and their associated evaluative statements, as well as of the notion of using ‘Outcomes for Learners’ as an index of the extent to which teaching has been deemed to have ‘promoted learning’. More importantly, they will involve an examination of the extent to which inspectors have been educated and trained and developed to engage in the kinds of analyses, evaluations and interpretations of the statistical and human factors which are likely to affect ‘Outcomes for Learners’. In the latter case the focus will be on the inspectors’ analyses, evaluations and interpretations of the data the colleges had made available to them.
The focus of the second type of inspection judgement is on the quality of provision, through the assessment of teacher effectiveness. The evaluation is derived from the assessment of the ‘Quality of Teaching, Learning and Assessment’. The data that enable inspectors to judge the extent to which they think teaching has promoted the desired ‘Outcomes for Learners’ are collected through lesson observations. Thus the overall focus of the second type of evaluation is on the colleges’ inputs and processes. Accordingly, the focus of chapters 3 and 4 is on lesson observations. In chapter 3 I will explore, analyse and evaluate Ofsted’s methodological approaches to the planning of lesson observations. And in chapter 4 I will focus on the explorations, analyses and evaluations of the judgement criteria and their associated evaluative statements on which inspectors would subsequently base their judgements of lessons, and hence the construction of ‘Grade Characteristics’ as a measure of the effectiveness of the colleges’ teachers. The explorations, analyses and evaluations will involve criticisms of, and questions about, the ability of inspectors to comprehend and make sense of the multi-layered interactions and transactions occurring in lessons. In parts of the criticisms I will draw on the work of C.P. Snow (1959), Igbino (2009, 2012b), Schon (1983) and Igbinomwanhia (2010).
The focus of the third type of inspection judgement is, again, on the quality of provision and on the colleges’ inputs and processes. However, on this occasion the attention of the inspectors is on the quality of the leadership and management and on the organisational structures and relationships which the colleges have designed and implemented in order to secure the quality of provision. The principal judgement here is of the ‘Effectiveness of Leadership and Management’. Thus the explorations, analyses and evaluations of the judgement criteria and the associated evaluative statements which were used to construct the ‘Grade Characteristics’ for leadership, management and governance will be attempted in chapter 5. The explorations, analyses and evaluations will, again, involve discussions of whether or not inspectors possessed the skills, competencies and expertise in organisational theories and in Organisation and Methods (O&M) to be able to carry out the types of leadership, management and organisational audits and analyses which were involved in judgements of the effectiveness of leadership and management.
During the inspections of the colleges, HMIs and privately trained, self-employed Additional Inspectors (AIs), contracted to Regional Inspection Service Providers (RISPs), arrived at the colleges to observe and grade lessons, lectures and tutorials; they carried out interviews with sample populations of students; they interviewed management and teaching staff; and they examined documentary evidence of the implementation of Equality and Diversity legislation, achievements on all levels of the curricular programmes offered by the colleges, recruitment, retention, value added, corporate plans, Quality Improvement Plans (QUIPs) and Self-Assessment Reports. The inspections culminated in reports in which the first, second and third judgement categories outlined above were summarised and used to compute the ‘Overall Effectiveness’ of the colleges. Accordingly, the explorations, analyses and evaluations of ‘Overall Effectiveness’ will be attempted in chapter 6. These explorations, analyses and evaluations will question the methodological approaches used in the syntheses and computations of the overall grades on the basis of the judgement criteria and the associated evaluative statements.
The continuous education and training and development of inspectors exerts an important influence not only on the quality of inspectors as human resource personnel, but also on their performance, on the achievement of established performance criteria and hence on the ability of Ofsted to continuously discharge its statutory duties efficiently. Thus in chapters 7 and 8 I will discuss the education and training and development of Her Majesty’s Inspectors (HMIs) and Additional Inspectors (AIs). In particular, in chapter 7 I will focus on the explorations, analyses and evaluations of the initial education and training and development of HMIs, while in chapter 8 I will focus on the explorations, analyses and evaluations of the education and training and development of AIs.
Parts of the data which inspectors would collect, analyse, evaluate and interpret in order to form their judgements on the ‘Effectiveness of Leadership and Management’ of the colleges include the management of the performance of teachers. Other parts of the data include the extent to which the results of the management of the performance of teachers had been used by the colleges’ leaders and managers to inform the contents, syllabuses and structure of the colleges’ Continuing Professional Development (CPD) programmes for teachers in order to improve the quality of teaching. Thus Ofsted takes CPD very seriously. Accordingly, in chapter 9 I will explore, analyse, evaluate and report on Ofsted’s Continuing Professional Development programmes for HMIs.
Ofsted manages the statutory requirements of the Education and Inspections Act 2006 on behalf of the State. Thus the entire rationale of Ofsted inspection is the standard of education in England. To ensure that the colleges complied with the requirements of the Act, Ofsted has developed a range of judgement criteria and evaluative statements. These judgement criteria and evaluative statements were the objectives of chapters 2, 4 and 5. The goals of these criteria and statements are to improve the quality of outcomes, of teaching and of leadership and management. Thus the objective of chapter 10 is to explore the methods used by Ofsted and the Regional Inspection Service Providers (RISPs) to manage the overall quality of inspectors and inspections.
The discussions in much of the preceding chapters will have demonstrated some of Ofsted’s deficiencies. These chapters will have demonstrated that Ofsted has in many instances failed to develop and implement the capabilities that would have enabled it to perform its statutory duties efficiently. The question that arises from the evidence discussed in the preceding chapters is this: why has Ofsted failed to identify its own deficiencies and to develop and implement operational capabilities that would have solved, or would have ameliorated the impact of, these deficiencies on its performance? Accordingly, in chapter 11 I will explore and try to provide some explanations of why Ofsted has not evolved capabilities that would have enabled it to address its deficiencies.
In the foregoing chapters the project will have disputed some of Ofsted’s claims and will have drawn attention to specific areas where Ofsted has failed in the achievement of its statutory responsibilities. Accordingly, in chapters 12 – 16 I will discuss the measures which Ofsted should implement in order to improve its operational capabilities. First, in chapter 12 I will address the issues surrounding Ofsted’s quality control and assurance systems and suggest a new standard and quality management system.
Second, in chapter 13 I will address the methodological weaknesses underlying the collection, analyses, evaluations and interpretations of documentary data on ‘Outcomes for Learners’. The chapter will discuss the contents, syllabuses, performance criteria and assessment of the education and training and development programmes for ‘Outcomes for Learners’.
Third, in chapter 14 I will discuss new education and training and development programmes for judging ‘Teaching, Learning and Assessment’. In addition, the chapter will set out the contents, syllabuses, performance criteria and assessment of CPD programmes on lesson observations.
Fourth, in chapter 15 I will discuss new education and training and development programmes for judging ‘Leadership and Management’. The chapter will set out the contents, syllabuses, performance criteria and the assessment of CPD programmes on the management of financial, human, physical and technological resources.
And, fifth, in chapter 16 I will discuss new systems to maintain the professionalism of practicing inspectors. The chapter will suggest the introduction of a national organisation to supervise the professional competence and conduct of practicing inspectors.
These introductory pages have sketched the aims and the methodological steps of the project and the context of the study, and have raised some of the pertinent questions which were important to the project. It was stated on these pages that, on the basis of the observation of a population of Ofsted inspectors during the inspections they carried out in two London colleges between 2002 and 2013, 30% of the inspectors observed during the period did not have the skills and competencies in basic methodologies, analyses and evaluation of evidence within the further education and skills remit. Additionally, these introductory pages have claimed that Ofsted had been uncooperative and that Ofsted stands in breach of the FOI Act (2000) because it has failed in its statutory duties to ask the RISPs to submit, for public examination, documentation of the quality and standards of the contents and syllabuses, assessment methods and performance criteria of the education and training and development programmes they provide for AIs.
In this chapter I want to discuss how I came to carry out a project involving Ofsted, its inspectors and its education and training and development programmes. First, I must state that I did not plan and set out to design and carry out a research project with Ofsted as the focus. Second, the project began by accident: the accident occurred when circumstances threw me and Ofsted inspectors together in the classrooms of two London colleges in 2002 and 2004 respectively. Thus in this chapter I want to discuss the first accidental encounters, and the subsequent encounters, with Ofsted inspectors during a period which stretched over more than a decade.
So, then, the question is: what exactly led to this project? The answer is, as I have indicated above, that I accidentally came across Ofsted inspectors in 2002. Up to then I knew of Ofsted because my main research interest was post-compulsory education policy and I was conversant with the provisions of the Further and Higher Education Act (1992) and the Education (Schools) Act (1992). I also knew that Ofsted had inspectors who inspected schools in England on its behalf. Thus the meeting in 2002 was my first real encounter with Ofsted and its inspectors. And the consequences of what transpired during the encounters raised questions in my mind about the skills and competencies of some of the inspectors I was to meet in the colleges during the period. These were questions which touched on the value of inspectors as human resource capital: employees operating in the field of educational standards on behalf of Ofsted.
The first set of encounters occurred in one of the colleges in which I was doing my research projects. At the time the college was being inspected. When the encounters occurred I was with a group of adult students and their teacher in an NVQ Level 2 Business Administration course. The lesson came under observation by an inspector. I did not particularly pay attention to the inspector when he came into the lesson and took his seat.
But as the lesson progressed I think he mistook me for one of the students, because as he went round the room looking at students’ work he came to look at what I was writing and asked me questions about the lesson. As I answered his questions it became clear to me that this particular inspector did not know what the teacher and her students were doing. I came to this conclusion because his questions were directed at finding out what I thought about being taught manual filing in a ‘room full of computers’. My thought was that he should be trying to establish whether or not I understood what the teacher was teaching about manual filing using the letters of the alphabet. Yet there he was, asking me about the appropriateness of teaching manual filing systems in a computer room. Was that what he was supposed to be observing in the lesson? Was he supposed to be observing and assessing the physical resources in the contexts in which lessons took place?
I could not see anything wrong with filing papers manually, and doing so in a computer room. I remember thinking that he had not read the materials the teacher had meticulously prepared for him about the lesson, or that he had read them and did not understand what he had read. This was because the lesson plan and the copy of the scheme of work, which the teacher had given to me and which I assumed had also been prepared for him within the document folder he was holding, contained a performance criterion whose Elements concerned the skills and competencies in manual filing which NVQ Level 2 Business Administration candidates must demonstrate. So the teacher was teaching the requirements of NVQ Level 2 Business Administration, and she was not expected to use computers in this particular instance.
Moreover, I was thinking that if the purpose of the observation was to judge the quality and standard of ‘Teaching, Learning and Assessment’, why should the physical presence of computers in the room in which the lesson was being held become an equally important factor in the assessment of the quality and standard of the teaching? As I sat there listening to him I kept thinking: if this inspector failed to understand or misunderstood the evidence unfolding before him, how could he make a valid and reliable judgement? I came away with the thought not only that that inspector had no understanding of the structure, contents, teaching and assessment of the NVQ Level 2 Business Administration qualification, but also that he had already formed his judgements about the lesson even though the lesson, which was a two-and-a-half-hour lesson as was customary in FE colleges at the time, was still within the first three-quarters of its first hour.
My subsequent encounters with Ofsted and its inspectors occurred in these colleges in 2004, 2005, 2006, 2007, 2008 and 2009, and between 2010 and 2013. The encounters occurred during full inspections, reinspections, fractional reinspections of specific departments, and monitoring and progress visits. Parts of these encounters occurred at close quarters with members of the inspection teams; other parts were brief; and yet other parts were ethnographic interviews by proxy. But the encounters were discursive and intense.
During some of the above encounters I came to find quite disturbing the patterns in what teachers claimed they had been told about the lessons observed by members of the inspection teams. Some of the information was disturbing because of the educational and pedagogical problems posed by specific aspects of the areas which teachers claimed had been identified by some members of the inspection teams as the sources of the weaknesses in some of the lessons they had observed in these colleges.
At specific points in these encounters there were inspectors arguing for planning which would have embedded primary educational analyses in post-compulsory education analytical contexts. Here I am not in any way suggesting that primary educational thought could not contribute meaningfully to ‘best practice’ and ‘good teaching’ in further education lessons, and I am not arguing that the skills, competencies and proficiencies acquired in primary education teaching, learning and assessment contexts could not be distilled and adapted into ‘best practice’ and ‘good teaching’, or for that matter ‘weak teaching’, in post-compulsory education. What I am arguing is that ‘best practice’ and ‘good teaching’ in primary and post-compulsory education contexts cannot be exact replications of each other, as a significant proportion of the inspectors had argued. The reason is that education is a human enterprise, and it is that humanness which makes the replication of educational thought across these contexts hazardous: how exactly do we argue the case that seven-year-olds, say, could be taught in exactly the same way as seventeen-year-olds, or indeed fifty-three-year-old adults?
Yet I have sat in these meetings with staff members and have listened to these inspectors discuss these thoughts and outcomes with staff. How could inspectors effectively observe teachers and make judgements on the standard of education, teaching, assessment and learning if they were unable to draw some basic distinctions between pedagogical theories and the implications of those distinctions for the approaches to teaching children and adults?
During the inspections of the colleges there were several other encounters with inspectors who did not know the teaching, assessment and grading structures of the now defunct Advanced Certificate of Vocational Education (AVCE), GNVQ and BTEC National Diploma. There were inspectors who did not know the impact on the colleges’ progress of the communities from which the colleges drew their constituents.
What emerged from some of these encounters was that there were inspectors of colleges serving some of England’s most economically and socially depressed urban centres who formulated their judgements without interpreting the consequences of the interplay between the academic, social, economic, ethical and intercultural dialogues occurring within the colleges, and between the colleges and the communities in which they resided and which they served. In other words, there were inspectors whom I met and listened to, during meetings with teaching and support staff and during briefings with senior members of the colleges’ management teams, who dismissed arguments that the social and economic conditions in which people lived might have some effect on their educational progress. And I met and listened to inspectors who argued that there should be no difference between the achievements of students in colleges in the more affluent areas of England and the achievements of students in England’s more economically and socially depressed urban centres.
Observing, listening to and hearing these inspectors articulate these arguments demonstrated that they lacked the skills and competencies required to gather, assess and interpret the qualitative and quantitative evidence of the consequences, for the educational outcomes of their constituents, of the social, economic, cultural and human factors prevalent in the communities in which the colleges they were inspecting resided and from which those colleges drew their constituents.
And the conclusion was that these populations of inspectors were unable to interpret the statistical data the colleges had prepared for them. I also had access to the same statistical data. These data tell the story of the social, economic and cultural characteristics of the colleges’ student populations. The data, particularly the colleges’ qualitative and quantitative research evidence on retention, punctuality and attendance, language and ethnicity, describe in graphic detail the human and physical constraints within which the colleges try to provide education for their students.
I will exemplify the problems with the skills and competencies of some of the inspection teams that I encountered, observed and listened to during the project. I will discuss these problems in more detail in chapter 2, where I will present evidence to dispute the inspectors’ analyses, evaluations and interpretations of the evidence that was available to them. But for now I invite readers to examine the two extracts below. The extracts are from the same inspection team which inspected one of the colleges in this project.
Activities and discussions are not challenging enough to stimulate the students and teachers’ expectations of the students are too low. Not enough teaching inspires and interests the students and, as a consequence, they are late to lessons and often do not attend regularly. (Ofsted 2013: 3)
Indeed, the majority of the students live in south Croydon and face long journeys on public transport to get to the college. (Ofsted 2013: 9).
In the first extract the inspectors drew the conclusion that students were late to lessons and absent from lessons because their teachers were uninspiring, uninteresting and had low expectations of them. In the second extract the same inspection team identified the difficult journeys which students had to make on the public transport system in order to arrive at their lessons. Yet in their conclusions they disregarded the possible impact of those ‘long journeys’ on public transport on punctuality and attendance. Readers should examine the two extracts and consider which of the two factors, teaching or public transport, could have been the more plausible contributory reason for lateness and absenteeism.
I will address these extracts later, but for now I want to point out that research evidence has demonstrated that post-compulsory education students do not come late to, or absent themselves from, lessons because their teachers are uninspiring, boring, unable to ‘enthuse’ them or have low expectations of them. Instead they usually vote with their feet, leave the course and go to another college if they think these conditions predominate in their studentship at these colleges (Igbinomwanhia 1998 cited in Igbino 2012a: 21).
There is abundant research evidence on the impact of social and economic class and cultural values on educational achievement and progress. These studies have demonstrated the impact of ‘circumstantial educational factors’. These factors, which have seeped from the environment into the colleges through the transactions between students, parents and guardians and the colleges as a whole, and between individual students and individual teachers in the classroom, have consequences for attitudes to educational achievement. The arguments are that these ‘circumstantial educational factors’, including economic and social class and cultural values and preferences, exert significant influence on teaching, education, learning and achievement and, indeed, on the ‘choice of curricular programmes’ (Igbinomwanhia 2010: 245).
Indeed, the extent of the poverty among the families and socio-economic and cultural groups who live in Inner London and other urban areas in the UK is well documented. Numerous researchers have drawn attention to the marginalisation of these families and groups. These researchers have advanced the argument that these families, which comprise the lower economic, social and cultural groups domiciled in these deprived urban centres, are more likely to underachieve educationally (Archer and Francis 2007: 36; Evans 2007: 7; Smith and Noble 1995: 29).
Additionally, researchers have argued that the families and socio-economic and cultural groups who live in economically and socially depressed urban areas often do not have the skills and wherewithal to negotiate the education system in order to send their children to schools and colleges in affluent middle-class neighbourhoods. Instead these families and groups tended to send ‘their children to neighbourhood schools’ (Jackson and Marsden 1962: 84). And according to Lunn these ‘neighbourhood schools’, with pupils from poor backgrounds and ‘predominantly working class children performed poorly academically compared to schools [schools and colleges in affluent rural England] with mixed in-take’ (Lunn 1971: 26).
Thus for these Ofsted inspectors to have argued that all the colleges and students in England were the same demonstrated the poor quality of thought of some inspectors as human resources employed directly or indirectly by Ofsted. More tellingly, their arguments were, to my mind, not only a testimony to the poor quality of the education and training and development of inspectors; more importantly, they were a demonstration of the absence of concerted Continuing Professional Development programmes which would have continuously equipped some of these inspectors to become conversant with contemporary theories, research, issues and developments in education. These inspectors were supposed to be Ofsted’s frontline experts. They were its frontline analysts in its management of the standard of ‘Outcomes for Learners’, ‘Teaching, Learning and Assessment’ and the ‘Leadership and Management’ of education in England. Yet there they were: completely ignorant of the interplay between deprivation and hunger and educational achievement.
Accordingly, the origins of this study lie in the problems posed by the above encounters. On the basis of these encounters, interviews and discussions over the period 2002 – 2013, the argument of this report is that 30% of the Ofsted inspection teams who inspected these colleges were poorly skilled and incompetent.
This project, among other things, will raise critical issues about Ofsted’s methodologies and the capability of its judgement criteria and evaluative statements to determine, assess and judge the standards of education in England. In raising these issues the study will also raise questions about the impact of the skills and competencies of Ofsted inspectors on Ofsted’s methodologies, analyses, evaluations and interpretations, and on its subsequent inspection judgements.
However, there have been previous criticisms of Ofsted’s methodologies. One such criticism was articulated by Fitz-Gibbon (1999). In her review of Ofsted she focused attention on inspection judgements; on the potential impact of inspection on schools; and on the historical precedent for the origins of Ofsted. Thus, first, her focus was on Ofsted’s methodologies and on the outcomes to which those methodologies led. She questioned the judgements of Ofsted inspectors from the standpoint of the size of lesson observation samples and hence the reliability and validity of inspection judgements. In other words, she was not questioning lesson observations as a method of data collection; she was questioning the sample size and the inconsistencies of lesson observers, and hence the objectivity of inspection judgements. Second, she was raising issues about the partiality of inspectors and, indeed, of inspections, and the potential damage that has befallen schools in the wake of unreliable and invalid inspection judgements – judgements founded on the above samples. Third, she was positing a historical precedent for the origins of Ofsted: she was arguing that, because of the Conservative Party’s historic hostility towards state education, Ofsted was its covert creation, intended to undermine and destroy state education (Fitz-Gibbon 1999: 14 – 15).
Richards (1999) pursued broadly similar lines of argument. His objective was to appraise the effectiveness of Ofsted’s inspection framework, and his aims and methods were again to examine Ofsted’s methodologies. His focus was on the judgements inspectors were required to make about the standards of attainment achieved by pupils, the progress pupils had made in lessons [progress which Ofsted seems to argue can be physically observed during lesson observations] and school improvement [the capacity of schools and colleges to improve continuously, irrespective of any decline in the overall national environment]. Richards’ main criticisms were that Ofsted’s understanding of the notions of standards, progress and school improvement was partial, that its methodological approaches to judging standards, progress and school improvement were flawed, and that because of these flaws the results of inspection, and hence the judgements of standards, progress and school improvement, were themselves flawed (Richards 1999: 51–52).
The central focus of the above essays was the appraisal of Ofsted as an organisation. Their focus was not on inspectors as the main human resources through which Ofsted developed, grew and became an effective and capable organisation. The above essays did not examine inspectors as practitioners of quality and standards on behalf of Ofsted. Indeed, there is scarcely any literature which has examined inspectors as human resources, particularly in terms of their employability skills and competencies and in terms of their relationships, attitudes, behaviours and approaches to teachers as professionals.
There is no study which has examined their education and training and development. There is no study which has focused attention on the evaluation of Ofsted’s performance criteria for inspectors. There is no literature which has examined how Ofsted manages the performance of inspectors at their duty stations and how it uses the results of that performance management to design and develop the contents and syllabuses of CPD programmes. There is no study which has examined the contents and syllabuses of Ofsted’s extant education and training and development programmes in order to establish the continuing fitness of those programmes for purpose. There is no study which has examined how Ofsted controls and assures the quality of inspection. And, more importantly, there is no study which has scrutinised the roles of the Regional Inspection Service Providers (RISPs).
Thus there are bases to argue that the effectiveness of Ofsted at carrying out its statutory roles could be explored, analysed and evaluated, first, by focusing on the education and training and development of inspectors as human resources, and hence on an assessment of the quality of the education and training and development programmes Ofsted provides for its inspectors to enable them to collect, analyse, evaluate and assess both qualitative and quantitative evidence.
Second, there are bases to argue that the effectiveness of Ofsted could be explored, analysed and evaluated through an assessment of the extent to which its education and training and development programmes are kept relevant to practice through the development of inspection-relevant contents and syllabuses of Continuing Professional Development for inspectors.
Third, the argument could be advanced that a system of quality control and assurance derived from the management of the performance of inspectors, prior to their arrival at the point of the delivery of inspection, could be used to identify and develop CPD programmes.
Fourth, and more importantly, there are theoretical bases to argue that the effectiveness and development of Ofsted as an organisation are influenced by its ability to adapt (Schein 1980: 236) and that the quality of its inspectors, particularly the quality of their skills and competencies, is a function of the quality of Ofsted’s education and training and development programmes.
This chapter has discussed the contexts and backgrounds of the project. The chapter has described the encounters with Ofsted inspectors, and hence the collection of field data, during actual inspections that occurred between 2002 and 2013. The chapter has argued, on the basis of the analyses of the data collected during those encounters, that 30% of practising Ofsted inspectors who arrived to inspect these colleges between 2002 and 2013 were unskilled and incompetent, and that this has consequences for Ofsted’s methods, the quality of inspections and subsequent inspection reports.
This chapter will describe how Ofsted and its inspectors judge the extent to which teaching and the leadership and management of the colleges have enabled students to achieve the desired learning outcomes. In other words: the focus of the chapter is on the exploration, analysis and evaluation of the judgement criteria and associated evaluative statements on which inspectors would subsequently base their judgements of the extent to which the colleges’ inputs and processes, involving teaching and leadership and management, transform into ‘Outcomes for Learners’.
From now on I will use the terms Ofsted, Ofsted inspectors and inspectors interchangeably.
Ofsted has defined the following four judgement criteria which inspectors must use to form the basis of their judgements of the extent to which the quality of teaching and assessment and of leadership and management has contributed to the achievement of the desired ‘Outcomes for Learners’:
- ‘All learners achieve and make progress relative to their starting points and learning goals
- Achievement gaps are narrowing between different group of learners
- Learners develop personal, social and employability skills
- Learners progress to courses leading to higher-level qualification and into jobs that meet local and national needs’ (Ofsted 2012: 40–41)
Thus the judgement criteria were the performance criteria against which the colleges were to be judged by inspectors.
Attached to the judgement criteria were twenty-nine evaluative statements. These statements were meant to guide and help inspectors to evaluate ‘Outcomes for Learners’ so that they were enabled to make each of the above judgements. Thus the judgement criteria were the performance indicators which defined what the colleges must attain, and the evaluative statements were the evidence and proof – the Elements – which the colleges had to demonstrate in order to satisfy the inspectors that they had successfully attained the requirements of the judgement criteria – the performance indicators. The role of the inspectors was therefore reduced to assessing the extent to which the evidence adduced by the colleges reconciled with each of the evaluative statements and correctly satisfied the performance indicators within the judgement criteria.
Although inspectors do carry out ethnographic interviews with teachers, managers and current students, as will be shown in table 25, the entire body of data that would enable them subsequently to deliver judgement on ‘Outcomes for Learners’ was documentary, because a significant volume of the disaggregated statistics on ‘Outcomes for Learners’ which the colleges would have made available to them would have been historical and would have referred substantially to the achievements and outcomes of students who had already exited their respective courses.
A closer examination of the judgement criteria and the evaluative statements demonstrates that they are vague and ambiguous and that they lack focus and clarity. Indeed, the statements were peppered with the terms ‘further guidance’, ‘where relevant’ and ‘inspectors take into account’.
The vagueness, ambiguity and lack of focus and clarity pose two serious problems. The first problem is whether these statements would enable inspectors to evaluate the underpinning factors which constitute the quality of ‘Outcomes for Learners’.
By underpinning factors I mean the factors which are inherent in the population characteristics of learners. The second problem flows from the issues surrounding the quality of inspectors, their skills and competencies, and hence their ability to make sense of the relationships between the judgement criteria and the evaluative statements and to incorporate and reconcile them into their analyses, evaluations and interpretations of the evidence adduced by the colleges, particularly the disaggregated evidence of the population characteristics of students. The question here, then, is: do inspectors, as a result of their education and training and development, have the skills and competencies to establish the relationship between the performance indicators and the evidence and proof?
The latter problem has consequences for the overall usefulness of the statements as evaluative tools. Since the statements were designed to aid and enable inspectors to make judgements, and subsequently to take actions on the basis of those judgements, their usefulness would depend crucially on the ability of inspectors to carry out the evaluation of evidence and to interpret the outcomes of that evaluation in the light of each of the judgement criteria. In other words: the problem re-emphasises the arguments and questions about the extent to which Ofsted has continuously educated and trained and developed its inspectors, first, in the methodological approaches to the collection, analysis, evaluation and interpretation of secondary data. Second, it raises the issue of the extent to which the education and training and development of inspectors would have equipped them to know and understand how to weigh evidence. Put simply, the problem is: do inspectors know and understand how to assess what evidence is relevant or irrelevant, what evidence to ‘take into account’ and what evidence to discard, in order to use the evaluative statements to make meaningful and objective judgements?
Ofsted inspectors made three principal post-inspection judgements of the colleges, of which ‘Outcomes for Learners’ was one. The question that then arises in the context of ‘making judgements’ is: how do inspectors use the judgement criteria and evaluative statements to construct their judgements of the extent to which teaching and leadership and management have led to acceptable ‘Outcomes for Learners’?
The answers to the above question are to be found in Ofsted’s pre-defined ‘Grade Characteristics’. The ‘Grade Characteristics’ are a template of descriptive statements of success criteria which inspectors use to award grades in descending order on the basis of the best fit between the colleges’ disaggregated statistics on students’ achievements and the judgement criteria and evaluative statements.
The ‘Grade Characteristics’ are shown in tables 3–6. I have reordered the texts in the right-hand column, but the texts have been left as they originally appeared in Ofsted’s handbook; I have added indicative performance criteria in the left-hand column. As can be seen from these tables, the grades range from ‘Grade 1: Outstanding’ through ‘Grade 2: Good’ and ‘Grade 3: Requires Improvements’ to ‘Grade 4: Inadequate’. Thus, on the basis of this descending order, Grade 1 is the highest and would mean that there is a close fit between the ‘Outcomes’ for the colleges’ students and the entire set of judgement criteria and evaluative statements.
Grade 1: Outstanding
illustration not visible in this excerpt
Table 3: Grade 1 characteristics
Grade 2: Good
illustration not visible in this excerpt
Table 4: Grade 2 Characteristics
Grade 3: Requires Improvements
illustration not visible in this excerpt
Table 5: Grade 3 characteristics
Grade 4: Inadequate
illustration not visible in this excerpt
Table 6: Grade 4 characteristics
The issues relating to the education and training and development, and the skills and competencies, of inspectors, and their consequences for the ability of Ofsted to deliver on its statutory duties, will be addressed later, in chapters 7 and 8. Accordingly, the remainder of this chapter will be devoted to the exploration, analysis and evaluation of each of the four judgement criteria and their accompanying evaluative statements.
The first judgement criterion and its associated evaluative statements sought not only to enable inspectors to pass judgement on the extent to which the colleges should and must have added value to the prior educational achievements of their entire student populations, but also to judge the extent to which the colleges should and must have created the circumstances that would enable the entire population of their students to achieve equivalent levels of qualification, ‘relative to their starting points’.
Thus the first criterion and its associated evaluative statements are arguments about curricular development, and since 1992 Ofsted and its inspectors have been waging those arguments on two fronts. The first front is Ofsted’s suspicion that the colleges and their teachers had ‘low expectations’ of their students and might therefore be tempted to under-teach by developing and teaching curricular materials which were below the levels of knowledge prescribed for the employment types or qualification types for which their students had registered.
Thus the aims and methods of the first judgement criterion and its evaluative statements were designed not only to act as deterrents to prevent the colleges and their teachers from cheating their students, but also to constitute safety valves which would ensure, first, that curricular materials were developed at levels commensurate with the requirements of national qualifications and, second, act as a check on whether the curricular materials that were developed were more challenging than the requirements prescribed by the awarding bodies.
The second front on which Ofsted and its inspectors have been waging their arguments about curricular development has been the demand that curricular development must be both backward- and forward-looking and that the past must inevitably determine the future. Thus there is, buried within the first judgement criterion, its evaluative statements and its statements of ‘additional guidance’, the argument that there are unbreakable links between students’ previous educational achievements and their future achievements, not just in education but in their future careers in the workplace and perhaps in play, leisure and hobbies.
Accordingly, in the words of the first judgement criterion and its evaluative statements, Ofsted has argued that some form of coefficient of proportionality exists which equates prior educational achievement to future achievement, such that a student or group of students who achieved Level 5 in English language and mathematics in primary school should achieve equivalent grades (A*–A or B in mathematics and English) at the end of a five-year secondary school education. Similarly, the same student or group of students should achieve precisely equivalent grades in English language and mathematics if they subsequently go on to study English language and mathematics at these colleges.
But how firm and credible are the foundations of these arguments? The answer is that they are shaky, because Ofsted states that it does not measure or collect data on whether or not there are possible links between prior educational achievements and future educational achievements (Blake 2012: 5).
This means that Ofsted does not have research evidence which demonstrates even a semblance of a link between prior and future educational achievements, however tenuous. Yet it nevertheless continues to spend hundreds of millions of pounds per annum to wage these arguments and to send inspectors into these colleges to find evidence whose value and contribution to the standard of education it is unable to establish, let alone explain. Indeed, that its inspectors continue to rely on the above judgement criterion and evaluative statements testifies to one of the main criticisms which this project has levelled at Ofsted: that throughout its existence it has mismanaged the judgement of the standards of education in England, that it has been regressive, and that it has not contributed anything new to the education of students in these colleges or, indeed, in the entire population of colleges in England.
The focus of the second judgement criterion and its evaluative statements is on the unity of ‘Outcomes for Learners’. By unity I mean that Ofsted is reiterating the arguments it made in the first judgement criterion: it is arguing in the second criterion and its evaluative statements that, irrespective of differences in their socio-cultural, ethnic and economic backgrounds, all the students attending England’s colleges must achieve at equivalent levels. In other words: the second judgement criterion and its evaluative statements are about underachievement. Their aims and methods were designed to use differential rates of achievement within student population groups as a platform from which inspectors could judge the quality of teaching, leadership and management.
In order to demonstrate evidence and proof against the performance indicators implicit in the second criterion and its evaluative statements, the colleges’ leaders and managers must, first, collect and record examination results data for AS and A’ levels. Second, they must disaggregate the data on the basis of the ethnicity, gender and socio-economic characteristics of their students. Third, teachers and the colleges’ leaders and managers must identify the economic, cultural and sociological underpinnings of underachievement within student groups and situate them within local and national performance contexts. And fourth, teachers and the colleges’ leaders and managers must demonstrate that they have used the data to formulate programmes of action plans designed to close the gaps in the rates of achievement between student groups, particularly ‘minority groups’. Thus Ofsted summarised the entire aims and methods of the second criterion as follows:
‘Where relevant, inspectors should take account [of] how well the achievement, including progress and progression data of different groups are collected, analysed and used to set targets to improve the performance of underachieving groups’ (Ofsted 2012: 40)
In the above statement Ofsted has argued for a unitary rate of achievement for all students in England. This means that students with the same starting point should achieve and progress at equivalent rates, irrespective of ethnicity, gender and economic and social backgrounds, and irrespective of the geographical areas of England in which the students are domiciled and attend college. In other words: Ofsted regards gaps in the rates of achievement between boys and girls, between some Black Ethnic Minority (BEM) groups and Whites, and between different socio-economic groups with similar achievement rates at Key Stage 4 as an anomaly caused by poor-quality teaching and assessment, and leadership and management.
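The disaggregation and gap analysis which the second criterion demands of the colleges can be sketched in a few lines. To be clear, this is a minimal illustration, not Ofsted’s actual methodology: the group labels, cohort sizes, pass counts and the choice of a reference group are all invented for the example.

```python
# Hypothetical sketch of the disaggregation exercise the colleges were
# required to perform. All figures below are invented for illustration.

def achievement_gaps(results, reference_group):
    """Return the gap in achievement rate between each group and a
    reference group, given a mapping of group -> (achievers, cohort size)."""
    ref_achieved, ref_total = results[reference_group]
    ref_rate = ref_achieved / ref_total
    gaps = {}
    for group, (achieved, total) in results.items():
        if group == reference_group:
            continue
        # Positive gap: the group achieves below the reference group,
        # i.e. a candidate for the target-setting Ofsted's criterion demands.
        gaps[group] = round(ref_rate - achieved / total, 3)
    return gaps

# Illustrative AS/A-level results disaggregated by invented group labels
results = {
    "group_a": (180, 200),   # 90% achievement rate (reference group)
    "group_b": (150, 200),   # 75%
    "group_c": (170, 200),   # 85%
}

print(achievement_gaps(results, "group_a"))  # {'group_b': 0.15, 'group_c': 0.05}
```

Even this trivial sketch makes the methodological point of the surrounding discussion: the arithmetic of gap-finding is easy, whereas deciding which gaps are anomalous rather than reflections of the interwoven factors discussed below is not.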
But research evidence suggests that there is a whole range of interwoven factors which have a significant impact on rates of achievement and hence are likely to account for the gaps in achievement between and within population groups. These factors range from ‘primary educational goals’ and ‘primary educational factors’, both of which are inherent in the nature of the subjects themselves and in the cross-cultural contexts in which learning is transacted, to the complications introduced into education, teaching and learning by ‘secondary educational factors’. These secondary educational factors embed problems derived from some of the ranges of values which students bring with them to the learning contexts and with which they transact and denominate their interactions with the ‘primary educational goals’ and ‘primary educational factors’, with teachers and with their peers. Additionally, the research argued that there are ‘circumstantial educational factors’, which include socio-economic variables associated with the ‘social and economic costs of participation’ (Igbinomwanhia 2010: 330–338).
The combined effects of the above factors are likely to have a significant impact on achievement. But nowhere in the entire body of documentary sources which Ofsted has made available to this project, including Ofsted’s own Annual Report, its research publications and the plethora of documents available on its website, has Ofsted explained, first, why there are variations in educational attainment between students with equivalent prior educational achievement. Yet it made the explanation of variations in educational achievement a judgement criterion of the quality of teaching and of leadership and management. Thus, by requiring that teachers and the colleges’ leaders and managers demonstrate that they have set targets to raise the attainment of ‘underachieving’ groups, it was effectively asking them to elucidate and account for the causes of the variations in groups’ educational achievement and then to manage those causes and eliminate them.
Second, there was no evidence in those documents that Ofsted explained why it considered variations in educational achievement to be anomalous. Indeed, Ofsted turned variations in educational achievement into an all-or-nothing, confusing and bureaucratic argument. On the one hand, in one of its inspection reports it criticises one College because the College did not ‘embed aspects of Equality and Diversity within teaching and learning across all curriculum areas’. On the other hand, in its inspection of another College it credits that College for its work on linking Equality and Diversity to curricular programmes, teaching, learning and assessment. But at the same time it criticises the first College because it judged that the standard of students’ ‘work vary too much’, and yet it wrote that Diversity in the first College was 60% (Ofsted 2012: 40; Ofsted 2013: 3). Does such human and cultural diversity and variation not have implications for variations in students’ work?
Thus there are questions to be asked about whether Ofsted, in its current form, is able to carry out its statutory responsibilities. The questions are as follows: are there some defined levels of variation which are acceptable and some which are unacceptable? In other words: how much variation is ‘too much’? Does Ofsted not think that the practice of inclusion and the diversification of curricular programmes, teaching, learning and assessment might have consequences for achievement rates? Does Ofsted not think that the variability, diversity and difference which are quite evident in society might also be found in rates of educational achievement? Why should it be detrimental if the variation reflects the variation within society? Does the country need to spend more than two hundred million pounds per annum to find out that the standard of students’ ‘work vary too much’?
The aims and methods of the third criterion and its evaluative statements concern the so-called Key Skills. This means that the learning outcomes which are the focus of the third judgement criterion and its evaluative statements are the so-called ‘hard’ and ‘soft’ skills. The former involve competencies in numeracy and literacy: English and mathematics. The latter involve competencies in personal and communication skills, including teamwork, and personal attributes such as personal organisation, reliability and trustworthiness. In other words: the goal of the third judgement criterion and its evaluative statements is to determine the extent to which teachers and the colleges’ leaders and managers have planned, developed and inculcated labour market skills in their students.
Thus, from this standpoint, the third judgement criterion and its evaluative statements are a restatement of the dominant discourses of Britain’s industrial ethnography in the 1980s. In those discourses Britain’s industrial ethnography was articulated and woven into arguments in which the private sector became synonymous with wealth creation, productivity and value for money; in which the public sector became synonymous with non-productive outcomes, consumption and the destruction of the wealth created by the private sector; and in which ‘education’ became ‘learning’ and ‘learning’ became an instrument of the labour market. I return to the implications of these patterns of discourse for the management of the quality of the practice of Ofsted inspections in chapter 10.
The fourth judgement criterion and its evaluative statements are concerned with the extent to which teachers and the colleges’ leaders and managers have counselled, advised and guided students from their initial point of contact, through their courses, to their subsequent exit from the colleges. Thus, first, inspectors were to pass judgement on whether students were made to understand the careers, progression and training and developmental opportunities available to them. Second, inspectors were to judge the extent to which students made progress to further learning or employment. Third, inspectors were to judge whether the qualifications, skills and knowledge attained by students were the right ones and would enable students to make the transition from these colleges and progress along their future career paths, whether in further education, employment, some other form of training or a combination of these pathways. Fourth, inspectors were to situate the progression pathways within local and national contexts and judge whether or not these pathways conformed to and met local and national needs. And fifth, inspectors were to evaluate the inclusivity of teaching, learning and assessment, leadership and management, and guidance and career advice, and judge whether or not they enabled ‘students with severe and complex learning difficulties’ to make progress and go on to lead independent lives.
In the remainder of the chapter I want to explore, analyse and evaluate the above judgement criteria and evaluative statements.
First, it is important to explain that when Ofsted uses the term ‘Outcomes for Learners’ in its reports the underlying meaning is that readers should selectively disregard the achievements of more than 45% of students in England. This is because Ofsted uses the term ‘Outcomes for Learners’ not only to mask a disdain for vocationalism but also to mislead parents, teachers, politicians and the entire population of England. Thus statements such as the following mislead readers:
‘… Primarily inspection evaluates how individual learners benefit from their courses and learning programmes. We must test the providers response to individual needs by observing how well it helps all learners make progress and fulfil their potential, especially those whose needs, disposition, aptitude or circumstances require particularly perceptive teaching and in some cases additional support…’
(Ofsted 2012: 37)
Anyone reading the above statement would assume that it refers to the entire population of students in England. But a closer examination shows that it does not, because when Ofsted inspectors evaluated and judged the colleges that were involved in this project the achievements of vocationally oriented students did not carry any weight in the ‘Outcomes’. Instead it was the ‘Outcomes’ for academic students on GCSEs, AS and A2 courses which counted towards the final judgements of the colleges. Thus Ofsted’s definition of the meaning of ‘Outcomes for Learners’, in terms of the grades achieved by students and the evaluations and subsequent judgements of whether the colleges were ‘Outstanding’, ‘Good’, ‘Requires Improvements’ or ‘Inadequate’, excluded the ‘Outcomes’ for vocational students and was framed entirely in terms of academic curricula.
The consequence of the exclusion of the achievements of vocational students for one of the colleges was that it effectively had ‘Outcomes’ for vocational students which were above the ‘National Average for levels 1, 2 and 3’ as defined by Ofsted, and yet it was still graded ‘Inadequate’ on ‘Outcomes for Learners’ by inspectors because the ‘Outcomes for Learners’ on comparable levels of academic courses were below the ‘National Average’.
Indeed, the exclusion of the achievements of vocational students shows that Ofsted is a regressive organisation because, more than two decades after the publication of the White Paper Education and Training for the 21st Century (DfE/DE 1991 Vol. 1: 19), published a year before Ofsted itself became part of England’s education landscape, it still draws a distinction between vocational and academic achievements, and between the competitive worth of vocational and academic qualifications.
Second, when Ofsted uses the terms ‘achievement’, ‘progress’, ‘progression’, ‘qualifications’ and ‘learning goals’ it does not use them to refer to the entire population of students and the entire range of curricular programmes within the National Qualifications Framework (NQF). Instead it uses the terms exclusively to define academic curricula and qualifications in inspection reports and other documents.
Third, when Ofsted uses the term ‘most able students’ it does not use the term to refer to the entire population of students in England. Instead its use of the term is exclusive: it uses the term to describe students who have achieved Grades A–B in examinations in academic subjects (Ofsted 2013: 4).
And fourth, when Ofsted uses the term ‘Outcomes for Learners’ its meanings are quite broad and wide ranging because the meanings include progression, employability skills, social skills and remedial education designed to ensure that all students achieve at the same level.
Thus according to Ofsted students could only be defined as ‘bright’ and ‘excellent’ in terms of ‘Outcomes for Learners’ if they achieve Grade A – B in academic subjects. But what if a student or groups of students were vocationally oriented in their learning? What if a student or groups of students preferred to take alternative routes to become ‘bright’ and ‘excellent’? What if in taking alternative routes, say vocational routes, the student or groups of students achieved D – D* (Distinction – Distinction*)? Ofsted and its inspectors would answer the above questions by reporting that a student or groups of students who achieved D – D* in vocational courses were not ‘bright’ and that their performances were not ‘excellent’.
The vocational students in the colleges were very much aware that Ofsted has very little regard for their efforts and achievements. A vocational student subsequently reflected on Ofsted inspectors’ attitudes to the value of vocational qualifications, and to the students who took the vocational routes, as follows:
‘They [inspectors] think BTEC Extended Diploma is lower than A’ levels. They think vocational students are dumb. You can hear it in their voices when they talk to vocational students. They speak slowly because they think we are not normal. They ask questions slowly like: how are you? What are you learning today? What do you think about what your teacher is doing? Like that. That is how they talk to vocational students as if we are unable to understand them when they speak to us normally…’ (BTEC Extended Diploma student: 2013)
Thus, while the colleges promoted the equivalence and equality of worth of, say, BTEC Extended Diploma and A’ Levels, Ofsted and its inspectors did not recognise the equivalence between any of the levels of vocational qualifications and any of the levels of academic qualifications in their overall evaluations and judgements of the extent to which the colleges and, indeed, teaching and assessment have enabled ‘individual learners’ to achieve their learning outcomes.
Accordingly, the judgement criteria and associated evaluative statements in the preceding sections referred entirely to academic qualifications. Additionally, I should make it clear that even though Ofsted sometimes uses the term ‘vocational’ in its reports and publications, it does not accord any relevance to the meaning of ‘vocational’ in its assessment of ‘Outcomes for Learners’ and its overall assessment of the effectiveness of the colleges.
One of the most important criticisms I have levelled against some members of the teams of inspectors that I encountered and observed in these colleges concerned the quality of the analyses, evaluations and interpretations which they carried out on the disaggregated statistical data on students’ rates of achievement. This criticism is important because the interpretations of AS, A2 and GCSE examination data formed the basis upon which the inspectors subsequently constructed their reports using the contents of tables 3 – 6.
The present criticism derives from close observations of, and participation in, discussions in which inspectors discussed their interpretations of the disaggregated statistics submitted to them during their inspections of these London colleges. In other words: the criticism is based on the inadequacy of the methodological approaches which some members of the inspection teams used when they balanced the disaggregated data on the rates of achievement for student groups with the contents of tables 3 – 6. Thus the criticism raises questions about Ofsted’s management of the standard of education in England, because from my observations and post-inspection interviews with principals and senior managers it emerged that 30% of the inspection teams who were involved in the inspection of the colleges between 2002 and 2013 did not have the skills and competencies they should have had in order to use and understand the meanings of statistical data in the real world.
Part of the criticism is that Ofsted has not educated and trained and developed some of the inspectors who were involved in the inspection of the colleges to know and understand that statistics work or fail in the real world because the rigour, impartiality and utility value of statistics are influenced by the humanness of the collectors, interpreters and users of statistics.
And the implications of humanness for statistics mean that the skills, competencies and preferences of some of the collectors, interpreters and users often have a marked influence on the importance they accord to specific aspects of the statistics, be they achievement, progression, attendance and punctuality statistics or otherwise. That in turn means that emphases may be placed on some aspects while others are played down, and that some aspects may be preferred while others are disregarded.
Thus there were two serious interpretive shortcomings regarding the ways in which some members of the inspection teams interpreted and used statistics in their reports of the inspection of the colleges. The first interpretive shortcoming was that, in focusing attention on the proportion of students who achieved A – A* at Key Stage 4 and who should subsequently have achieved equivalent grades at Key Stage 5, the inspectors neglected an entire population of students who did not achieve A – A* at Key Stage 4 but who subsequently achieved A – A* or equivalent at AS and A2.
The above state of affairs was a common interpretive occurrence during the inspection of the colleges. And it was quite astonishing that these inspectors left this large population of students out of their reports on achievement and progression. In other words: they simply left out students who came to these colleges with little or no GCSE grade point averages, embarked on Level 2 GNVQs or BTEC First Diplomas, achieved Merit and Distinction on these courses, moved across to academic A’ level curricula and subsequently achieved at that level.
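The interpretive gap described above can be made concrete with a small sketch. All of the records, field names and thresholds below are invented for illustration, not Ofsted or college data; the point is simply that filtering a cohort only on high Key Stage 4 grades discards exactly the students whose progress the first judgement criterion asks inspectors to recognise.

```python
# Hypothetical cohort records: (student, average KS4 points, KS5 grade).
# All figures and the threshold are illustrative assumptions only.
cohort = [
    ("S1", 7.2, "A"),   # high starting point, high outcome
    ("S2", 4.1, "A"),   # low starting point, high outcome
    ("S3", 3.8, "B"),   # low starting point, strong outcome
    ("S4", 7.0, "B"),   # high starting point, strong outcome
]

HIGH_KS4 = 6.5  # stand-in cut-off for "achieved A - A* at Key Stage 4"

# The interpretation criticised above: track only students who started high.
tracked = [s for s in cohort if s[1] >= HIGH_KS4]

# The students the reports left out: low starting points, high KS5 outcomes.
overlooked = [s for s in cohort if s[1] < HIGH_KS4 and s[2] in ("A", "B")]

print([s[0] for s in tracked])     # ['S1', 'S4']
print([s[0] for s in overlooked])  # ['S2', 'S3']
```

In this invented cohort, half of the students who achieved highly at Key Stage 5 never appear in a report built on the first filter alone.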
The second interpretive shortcoming was that the inspectors always applied similar interpretations to statistics, irrespective of the differences in the circumstances and contexts in the real world which underpinned and gave meanings to the statistics. Thus, as I mentioned in chapter 1 during the descriptions of my encounters with Ofsted inspectors, these inspectors were looking for correspondence and equivalence between the statistics on ‘Outcomes for Learners’ from affluent socio-economic constituencies of rural England and statistics on ‘Outcomes for Learners’ from the constituencies of economically depressed urban England. And they were using the disparities between the rates of achievement in public examinations by students in rural and affluent England as the basic comparative inputs to judging the extent to which the colleges in poor urban centres of England enabled students to achieve their learning goals, given their starting points.
And much more astonishing was the fact that a significant proportion of the inspection teams simply did not take account of the data on students’ profiles. It might be that they saw the data but did not know how to interpret the implications of the data for students with the population characteristics which were quite apparent from the profiles. The data on students’ profiles which one of the colleges placed before the inspectors clearly showed that over 75% of the students in the College had not passed GCSEs in English and Mathematics at the time they enrolled at the College and that the highest average GCSE score of students enrolling on AS courses at the College was 5.6.
And more crucially, in the search for equivalence in the rates of achievement between colleges in rural and affluent England and deprived urban England, the entire population of inspectors who inspected the colleges during this project misinterpreted the meanings of the first judgement criterion and its evaluative and guidance statements. I have reproduced the entire criterion and its associated statements in fig 1.
In judging Outcomes for learners, inspectors must evaluate the extent to which:
‘All learners achieve and make progress relative to their starting points and learning goals’
144. To make this judgement, inspectors will consider the extent to which:
- Learners attain their learning goals, including qualifications, and achieve challenging targets
- Learners’ work meets or exceeds the requirements of the qualifications, learning goals or employment
- Learners enjoy learning and make progress relative to their prior attainment and potential
- Learners make progress in learning sessions and/or in the work place, and improve the quality of their work
- Learners attend, participate in, arrive on time and develop the right attitudes to learning.
145. Where relevant, inspectors should take into account:
- Important learning objectives that are additional to learners’ qualification aims
- Social and personal development, including employability skills
- Achievement data in different settings
- The quality of learners’ work and their ability to demonstrate knowledge, skills and understanding, with particular attention to the level of skills reached by different groups of learners.
Fig 1: the first judgement criterion for ‘Outcomes for Learners’
The search for correspondence and equivalence was simply a mask over poor and incompetent analyses, assessments and interpretations of evidence, because it meant that the consequences for educational achievement and progression of the social, cultural and economic differences between affluent rural and depressed urban England were subsumed under the spurious notion that there is an absolute statistic called a ‘National Average’ – a kind of statistical utopia which fits and explains all. And to mask these lopsided interpretations and use of statistics the inspectors always based their judgements, arguments and reports on absolute statistical values: they never paid any attention to trends within the statistical data before them. I return to this incompetence or wilful neglect of statistical evidence in section 2.6.4 when I go on to look at the impact of population characteristics on ‘Outcomes for Learners’.
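The difference between judging on absolute values and judging on trends can be sketched with a few lines of arithmetic. The rates and the ‘national average’ below are invented for illustration only: the sketch shows that a college can sit below the average in every single year while improving steadily, and a judgement built only on the absolute gap never sees that trend.

```python
# Hypothetical achievement rates (%) over five years; all figures invented.
college = [58.0, 61.0, 64.5, 67.0, 70.0]
NATIONAL_AVERAGE = 72.0  # stand-in for the 'National Average' criticised above

# Absolute-value reading: the college is below the average every year.
below_every_year = all(rate < NATIONAL_AVERAGE for rate in college)

# Trend reading: a simple least-squares slope over the five years.
n = len(college)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(college) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, college)) / \
        sum((x - x_mean) ** 2 for x in xs)

print(below_every_year)   # True: the absolute reading condemns the college
print(round(slope, 2))    # 3.0: yet it improves ~3 percentage points a year
```

Both readings are computed from the same five numbers; only the second one tells the story of direction and pace of improvement.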
But for now readers should study fig 1 very closely. As a close study of the statement ‘relative to their starting points’ in fig 1 demonstrates, the students in these colleges had different ‘starting points’, yet the inspectors simply did not or could not interpret the possible relationships between the disaggregated data on students’ profiles, the disaggregated data on the rates of achievement and those starting points.
Thus, if I use fig 1 and combine its implications with the subsequent conclusions of the inspectors that ‘many of the students at [the] College do not have A* to C grades passes in English and mathematics when they enter the College’ (Ofsted 2011: 2) and that the ‘average prior attainment at GCSE on entry to the College is noticeably lower than is normally seen in sixth form colleges’ (Ofsted 2013: 9), then I would expect ‘Outcomes for Learners’ at the College to reflect those ‘starting points’. Accordingly, in order to make any valid comparisons, the data on the London College would have had to be compared to colleges that admit students with similar profiles in the London area, and not to colleges in West Yorkshire, even if they admitted students who had the same ‘starting points’.
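The like-for-like point can be illustrated with a hypothetical value-added comparison. The college names, entry scores, pass rates and the expectation function below are all invented assumptions, not real data; the sketch simply shows that ranking by raw outcomes and ranking by progress relative to intake can point in opposite directions.

```python
# Hypothetical colleges: (name, average GCSE score on entry, pass rate %).
# Names and all figures are invented for illustration only.
colleges = [
    ("Rural College", 6.8, 88.0),   # affluent intake, high raw outcomes
    ("Urban College", 5.6, 74.0),   # low intake profile, lower raw outcomes
]

def expected(entry_score):
    # Assumed (illustrative) expectation: what the intake profile predicts.
    return entry_score * 14.0

def raw_rank(colls):
    # Ranking on absolute outcomes, as the inspectors did.
    return sorted(colls, key=lambda c: c[2], reverse=True)

def progress_rank(colls):
    # Value-added ranking: outcome relative to the intake-based expectation.
    return sorted(colls, key=lambda c: c[2] - expected(c[1]), reverse=True)

print([c[0] for c in raw_rank(colleges)])       # ['Rural College', 'Urban College']
print([c[0] for c in progress_rank(colleges)])  # ['Urban College', 'Rural College']
```

Under these assumed numbers the urban college outperforms what its intake predicts while the rural college underperforms, so the two rankings reverse: which is precisely why comparisons have to be made against colleges with similar intake profiles.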
Those, then, were some of the areas of concern and criticism. They were the areas which demonstrated the dearth, within the ranks of the inspectors who inspected the colleges, of analytical, evaluative and interpretive skills and competencies, particularly skills and competencies in the analyses, evaluations and interpretations of statistical data and in drawing inferences on the basis of those analyses, evaluations and interpretations. And they were the areas where the inspections and subsequent reports of the inspections gave inadequate and unrepresentative testimonies about the realities of the implications of the colleges’ timetabled day for ‘Outcomes for Learners’. These were areas which Ofsted needs to tighten and which need to become integral to the contents and syllabuses of its CPD programmes.
‘External things do actually affect your achievement because if you don’t get the right external factors you are likely to be affected. For example, if you have no money to travel to college you will not attend class and if you don’t attend you cannot learn and if you don’t learn you don’t achieve. So all these combinations of external things affect achievement’ (BTEC National Diploma and A2 students)
The students whose statements I have quoted above were stating the obvious. But the obvious they were stating escaped the inspectors of the College. And what was it? It was that factors traceable to population characteristics, for example poverty and deprivation, have the capacity to trigger events which have consequences for achievement.
Thus one of the important factors which should have informed the judgements on ‘Outcomes for Learners’, and which was completely beyond the comprehension of some of the inspectors, was the consequences for achievement of the variability and dynamics of the population characteristics of students in the colleges. These inspectors were also completely ignorant of some of the crucial factors, for example having ‘no money to travel to college’, which would have influenced and given meaning to that variability and dynamism. The reason the inspectors were unable to comprehend the impact of ‘external factors’ and population characteristics was that their interpretive abilities were limited: to them, a student in Devon was exactly the same as a student in the inner city of London, and therefore these students should have achieved at equivalent rates if they had the same ‘starting points’.
In the above discussions the inspectors ignored significant factors which were capable of exerting potent influence on ‘Outcomes for Learners’. The principal factor among the myriad factors which would have influenced and given meaning to the variability and dynamism of the population in these colleges was the burgeoning socio-cultural and economic diversity within the population groups. These socio-cultural and economic diversities were flagged, and would have been apparent, in the disaggregated data on students’ profiles. And these data were not without implications for ‘Outcomes for Learners’.
But the implications that these socio-cultural and economic diversities have for ‘Outcomes for Learners’ were meaningless to the inspectors, even when the former deputy principal of the College stated the conclusions of his analyses: first, by connecting the meanings and implications of the then Labour Government’s policy of inclusion to the population characteristics of the students in his College; and second, by connecting the population characteristics of his students to the rates of achievement, and by using the consequences of those characteristics for the rates of achievement to raise arguments about the way the inspectors were carrying out their measurements of the College without due consideration for the impact of the population characteristics of his students on ‘Outcomes for Learners’. These conclusions were detailed and supported by statistical data, and they were also discussed during my interviews with him. He stated some of his conclusions as follows:
I think there is a tension between the push for further inclusion and the push for increased measurements and measuring and comparison of colleges with each other and colleges and schools…Most of the students we have for level three are on the margins of being able to succeed there…We have to do everything right in order for more students not to fall off the scale… I think there is a tension. I wouldn’t want to say there is a problem. It’s a problem to be resolved. It’s a problem to be worked through, but there is certainly a tension between the way we are measured in one way in terms of inclusion and participation and in another way in terms of raw outcomes. (Cited in Igbinomwanhia 2010: 302)
The question that then needs to be asked is why some of the inspectors failed to pay due attention to some of the data that was made available to them. There are two possible answers to this question. The first is that the inspectors had already formed their opinions and judgements of the College prior to the start of the inspection: as some of the Departmental Directors subsequently argued during my post-inspection interviews and discussions, inspectors ignored substantial parts of the statistics that were made available to them. Additionally, the Departmental Directors claimed that inspectors disregarded the positive and upward trends within those statistics, irrespective of the numerous discussions that were held with them. More importantly, it was claimed that papers had been prepared during the discussions in order to direct the attention of the inspection teams to the overall statistics; instead, it was claimed, the inspectors had not read the papers in whole and had focused fractionally and exclusively on the negative data within the statistics.
There is truth in the experiences reported by the Departmental Directors because similar reports were filed by Managers in 2004 and subsequently during reinspection in 2006. These managers claimed that inspectors were selective in their use of the statistics that were presented to them.
The second explanation of why the inspectors failed to give due attention to some of the data that was made available to them could be that they were unable to contemplate the implications of the diversity of preferences, interests and lifestyles, of the deputy principal’s conclusions, and of the differences in the definitions of the meanings of the rates of achievement for ‘Outcomes for Learners’. In other words: when Ofsted inspectors judged the extent to which the quality of teaching in the College ‘promotes learning’ and enabled the gaps in learning outcomes between population groups to be closed, they were unable to inquire into how the gaps opened up in the first place. Additionally, they were unable to examine which of the myriad factors within and without the learning environments might have been the major contributors to the origins of the gaps.
The latter is one of the main reasons why Ofsted’s methodological approaches to the management of the standard of education in England have been flawed. And it is also one of the main reasons why Ofsted inspection reports are heavy on criticisms and thin on objective solutions to the problems it perceives to be facing education in England. Ofsted directs its inspectors to examine the colleges’ records in order to satisfy themselves that the colleges have formulated action plans and set targets to close ‘achievement gaps’ within their student population groups. But it did not at the same time direct its inspectors to offer or discuss any clues or guidance with teachers, leaders and managers of these colleges on the origins of the factors that led to the gaps and hence on how the problems posed by the gaps might be addressed so that the gaps might be closed.
What do tables 3 – 6 mean in the light of the foregoing discussions? In other words: what do tables 3 – 6 say, given the foregoing discussions? What tables 3 – 6 do say is that education, as I have mentioned earlier, is a human enterprise. Its stock in trade is people. Therefore it was a serious omission for Ofsted to direct its inspectors to judge ‘Outcomes for Learners’ and the ‘gaps’ in achievements between learners while at the same time neglecting the variability within the population of learners and the interplay between that variability and the ‘Outcomes’ and ‘gaps’ in achievement which were the object of the judgements.
Indeed, Ofsted inspectors presented and discussed the existence of variations in the rates of achievement and ‘gaps’ in achievements as if they were detrimental and as if the ‘gaps’ were planned occurrences which were the consequences of the actions these colleges took or failed to take. The ‘gaps’ were not in any way detrimental. Instead, the ‘gaps’ were natural and random occurrences because they were a manifestation of the variations and differences inherent in humanness.
Ofsted usually cloaks the above omission in phrases such as ‘comparable population’ and ‘comparable institutions’. And when Ofsted and its inspectors use these phrases they are trying to bury questions about the relationships between population characteristics and the rates of achievement; they are trying to bury the probable impact of the relationships between the colleges and their environments on ‘Outcomes for Learners’; and they are trying to bury the impact of the relationships between learners’ social and economic conditions and the learners’ ‘internal state of being’ on the rates of achievement (Igbinomwanhia 2000: cited in Igbinomwanhia 2010: 23).
More importantly, they are trying to bury the analyses of the notion that many of the factors that would subsequently become significant contributors to variations in achievement later in the lives of students in contemporary England would already have been laid down in the population characteristics of individuals and groups even before pupils leave their homes for nursery schools. And, indeed, when Ofsted resorts to using the terms ‘comparable population’ and ‘comparable institutions’ it is trying to bury the alternative argument that some of the most important factors which would affect educational achievement in individuals and groups were to be found within the inequality in society because of the social and economic divisions within British society (Butler 1973: 3).
In fig 1 I also showed the origins of some of the most devastating and damning criticisms levelled at the colleges by Ofsted inspectors. These criticisms, which were based on the inspectors’ interpretations of the colleges’ data on ‘retention, attendance’ and ‘punctuality’ rates, were to be found in the evaluative statement ‘Learners attend, participate in, arrive on time and develop the right attitudes to learning’ (my italics).
But what exactly do the terms ‘retention, attendance’ and ‘punctuality’ mean in contemporary England? The question might be answered by stating that in contemporary England ‘retention, attendance’ and ‘punctuality’ are issues of human affairs and of the management of those human affairs, particularly those associated with reliability, dependability, motivation, planning, organisation, self-control, self-management and, indeed, self-coordination.
Thus when the inspectors criticised the colleges because of poor ‘retention, attendance’ and ‘punctuality’ rates they were using the notion of poverty in these human affairs, and in the management of these affairs, as evaluative factors to assess the quality of ‘Outcomes for Learners’. However, the extent to which the judgements of such factors, particularly the judgements of ‘retention’, contribute to the standard of education, teaching and assessment in England is debatable. But I will leave the debate and let readers reflect on the significance and consequences of the ‘external factors’ cited by the students whose statements were extracted and discussed in the preceding sections.
Nevertheless Ofsted’s aims and methods in this instance were quite clear: low ‘retention, attendance’ and ‘punctuality’ were the symptoms of low standards, ‘weak teaching’ and ‘low expectations’, and of the inability of teachers to ‘enthuse’ their students. And the inspectors concluded as follows:
Activities and discussions are not challenging enough to stimulate the students and teachers’ expectations of the students are too low. Not enough teaching inspires and interests the students and, as a consequence, they are late to lessons and often do not attend regularly.
(Ofsted 2013: 3)
The above conclusion demonstrates the crudeness of the ways a significant proportion of the inspectors interpreted the statistical evidence that was made available to them in terms of the first judgement criterion, the evaluative statements and tables 3 – 6. The interpretation was simple: if students were late to lessons or were absent, it was because their teachers were boring and were unable to ‘enthuse’ them, and that was why they either came late to lessons or stayed at home. No other reasons for these human frailties were considered possible.
Yet the inspection teams who drew the above conclusion also drew the following conclusion about a significant proportion of the college’s students:
Indeed, the majority of the students live in south Croydon and face long journeys on public transport to get to the college (Ofsted 2013: 9).
Could these long journeys on public transport not be considered an important component in the discourse of ‘attendance’ and ‘punctuality’ in these colleges? Anyone who travels on public transport in London would probably answer ‘yes’ to the above question. Yet while the inspectors accepted that students faced difficult journeys to get to the College, they did not factor that difficulty into their judgements; instead they concluded that students were ‘late to lessons and often do not attend regularly’ because their teachers did not know how to teach and were unable to entertain their students.
There are alternative arguments and explanations which directly contradict and challenge the above conclusions. Indeed, other than difficult journeys on public transport, there were factors in the lives of a significant proportion of the student population which affected ‘retention, attendance’ and ‘punctuality’ in the colleges. Many of these factors, some of which were linked to the socio-cultural and economic characteristics of specific student populations, were beyond the control of the colleges.
And research data, based on interviews with students, demonstrated the interplay between these population characteristics and the reasons why some students dropped out or were sometimes absent or late. The research data which I made available to the College would also have been made available to the inspectors by the College. But these inspectors completely ignored some of the very compelling evidence within the interview data, or were unable to assess the data and interpret the implications of the evidence for the College’s ‘retention, attendance’ and ‘punctuality’ rates.
A cross-section of some of the evidence demonstrated within the data is as follows:
‘But it did not make any sense for me to continue to attend classes because the courses I was doing required me, basically, yeah, to continuously collect information for my coursework. I can only do this in the College because I have no access to computer at home’… (AVCE Student 2007)
‘Some people have trouble at home. I sometimes have trouble at home and I have to go to my cousin in West Norwood and I live in Coulsdon, Surrey. Sometimes I sleep there and then I come late to class and he (Team Manager) is like “if you come late again you may be thrown out of the College”…They don’t understand that I did not simply want to absent myself from college or come late to class… Basically the College does not really know what the majority of students are going through’. (BTEC National Diploma Student 2010)
And another student had the following to say:
‘In my situation, in my family, in my house where I live, people, especially my age [seventeen] I am supposed to be responsible for what I do because my mum or my aunt or whoever I am living with they take many of these into consideration…and they have got themselves to look after and for me I have to look after myself because I am grown now, isn’t it? I have to be responsible and for me to get that kind of money [Tram fare] that day I kind of found it hard…I have to work and sometimes I am too tired and I over sleep or I don’t go to College at all’. (BTEC Extended Diploma 2012)
What were these students saying? They were contesting and contradicting the conclusions of the inspectors. They were saying that they faced circumstances which mediated their desires and willingness to participate in full in the College’s life. They were also saying that the inspectors and, indeed, the Government were ignorant of the circumstances in which they have tried to attend College and have sometimes failed.
The latter student had attendance and punctuality rates of 15% and 45% respectively before he eventually dropped out of College. I tracked him to where he worked five nights a week for one of the large supermarkets filling shelves and cleaning the aisles. According to his testimonies his duty rosters now and when he was attending College were as follows: three nights a week during which he claimed that he worked from 11 pm in the evening to 7.30 am the following morning before trying to make his way to College. And for the remaining two nights he claimed that he worked from 6 pm in the evening to 2.30 am in the morning. In the latter case he claimed that he does not go home because he does not want ‘to be out on the streets at 2.30 am’. He claimed that he ‘sort of hang around the shop until dawn’.
It was students like these that the Education Maintenance Allowance (EMA) was designed to help: yet it was one of the first post-compulsory education policies to be discontinued by the coalition. And the two inspection teams between 2011 and 2013, comprising two HMIs and seven AIs, simply disregarded the above interview evidence along with the documentary evidence which the College would have placed before them, and instead argued, without counter-evidence to support their arguments, that students dropped out, were absent from, or were late for lessons because their lessons were uninteresting, unenjoyable, meaningless and unchallenging, and hence were unlikely to enable students to achieve their primary learning goals and make progress, given their prior attainment and potential. And that was not all: the inspectors also argued that instances of low retention, high absenteeism and poor punctuality pointed to the fact that the colleges had been unable to ‘develop the right attitudes to learning’ in their students (Ofsted 2012: 39).
[i] Readers could use the Inspection Number and URN to peruse the report and reflect, first, on the extent to which the analyses of these questions have featured in the report; second, on whether the analyses of these questions have enabled light to be shed on ‘Outcomes for Learners’; and third, and most important, on the extent to which the questions have enabled light to be shed on Ofsted’s definition of the aim and objective of teaching, which is ‘to promote learning’.