‘Discussions of Accountability are peculiarly subject to technical regression: arguments about values soon become arguments about techniques.’
Accountability in public services such as health, education and social provision has descended into technical, data-driven processes which become blind to the very ethics, aims and contexts they are intended to help us understand and make transparent. As accountability has flourished in a neoliberal context, it has defined ‘good work’ in increasingly narrow ways. This is exemplified in the way that Green (2011) emphasises how accountability has tended to diminish the role of responsibility. Discussing the work of teachers, she comments,
‘The responsibility of a teacher, it might be agreed, goes much wider than any accountability targets that originate in central government and ‘cascade’ down through various bodies to whom the teacher is accountable, ending with the ‘expected performances’ of individual teachers. (Pring, 2001a: 281)’ (81)
Green suggests that as we increase levels of reductive accountability, the responsibility and professionalism of teachers decline. Even worse, accountability measures often create a need for ever more collecting and reporting of data, which in turn diminishes the time available to actually carry out substantive tasks and effect positive change – what might be called the paradox of reductive accountability.
Even though quantitatively driven accountability systems can be damaging to professionalism, diminishing the role of responsibility and eroding the time required to offer high quality experiences for students, universities in the UK are aggressively pursuing such an agenda. Why? I think this is the result of a number of related factors:
- The TEF is an under-powered, reductive and perverse framework. To keep its operational cost low, it simply recycles unconvincing proxy data and then creates a coarse-grained three-tier attainment framework into which universities are fitted. Whilst this is an obviously poor system, some VCs, university leadership teams and marketing departments see its potential for positively positioning their organisations within the HE market. Therefore, university leaders pursue the data at all costs in the hope of being able to put a gold icon on letterheads and the side of buses. As a consequence, since its first publication in 2017, whole cottage industries have grown up in universities in an attempt to find quick wins to game the new system.
- Quantitative accountability systems are loved by large organisations such as universities as they allow for standardisation, and hence cost-cutting. Numeric data are always easier and quicker to get to grips with, and numbers also feel more ‘truthy’ than narratives; an added advantage is that they keep the messages and insights simple.
- If developed to be ubiquitous, quantitative accountability systems are the perfect basis for engendering self-surveillance amongst staff. And as the number of activities reduced to simple numbers increases, so the self-surveillance becomes ever more totalising.
Yet for all their institutional appeal, whilst accountability systems may lead to a ‘good set of data’, they are generally awful as the basis for creating great pedagogy.
The pursuit of data leads to an accelerated timeframe for change, with change processes increasingly led by short-term imperatives. You only need to look at the succession of quick fixes attempted in the schools sector over the past 15 years to see how constant, rapid shifts bring silver bullet after silver bullet with very little actual or positive change. They do, however, offer a ballooning trade for consultants, data/software companies and other market-led solutions. It already feels as though this approach is being adopted and ramped up in the HE sector.
But why should it be like this? As these changes have started to take hold, I’ve been reflecting on the core business and culture of universities. Surely if they stand for anything it is scholarship – research in a synthetic tension with teaching. So why don’t we harness this age-old relationship? If we want to improve pedagogy (and hence the outcomes of teaching) we need to investigate and research, not collect more numbers, write more policy documents and react to evaluations we know are often inherently biased and inaccurate. This means we need to create approaches which explore and explain pedagogic processes, allowing great applied insights to emerge which can be debated and embedded across the university. Given the income generated by teaching in an HE institution, it always amazes me how little is recycled into exploring and explaining pedagogic practices.
We need to develop a different way of working, seeing teaching as a research and development activity, one which is central to the work of universities and acts as a positive lever for what they offer. Research and development takes time; it isn’t a quick-fix process. But once instigated it can have a consistent and positive impact on practice. Below, a potential model is outlined to show how a research and development-led system might work, one based on professionalism, responsibility and trust rather than on coercive accountability.
With the diagnostic dialogue completed, the researcher would then carry out some initial reconnaissance work, including linking up with other potential co-investigators and completing a rough literature search to see whether there is any evidence already available for integration into an investigation plan. Taken together, these activities would lead to the design of an appropriate investigation, or might instead lead to a suggestion of coaching/mentoring. The flow diagram shows indicative approaches. The crucial issue here is that the investigation has a bespoke structure which attempts to respond to the detail of the diagnostic dialogue.
Because each investigation is bespoke, the time it takes to complete will be dictated by the nature of the activities involved. The time needed depends on the rhythm of the work and shouldn’t be determined by some nominal quota.
At the end of the investigation, there needs to be a carefully considered reporting phase. This would include some form of sharing of insights within the organisation, but could also be a vehicle for academic publication, conference attendance, and so on. The whole purpose is to develop a critical scholarship across the organisation as a resource on which all can draw. Linked to the reporting are explicit discussions about the degree to which the work can become a catalyst for further investigation, as well as consideration of the potential for scaling up to other parts of the organisation. There should also be an opportunity to meet with the tutor(s) again to establish the extent and form of professional development they believe they have experienced.
As the investigation proceeds, a final aspect which needs to be included is a meta-level dialogue which attempts to consider and understand the suitability of the processes used, how new and radical/inventive approaches might be developed to support investigation, and how ideas can be scaled and networked.
Universities are heading in the direction of reductive accountability, diminishing autonomy and creativity, engendering standardisation, and attempting to understand their own organisations predominantly through summative, quantitative data – what I have elsewhere called ‘working with the shadows’. Here, I’m suggesting that this is a sure way of slowly killing the quality of practice. An increasing amount of money is being spent on a burgeoning TEF industry. If, instead, we spent this on research and development capability (allied with networking systems to help make the insights available throughout an organisation), we might not show such a rapid accumulation of data, but we would be in a much better position to support sustained and sustainable change to pedagogic practice. In the longer run this would have a much more radical impact on the quality of teaching in universities.