Ongoing competence: BS or 5750?

To those of you who get the reference in the title: congratulations. You have come a long way. For the rest, BS5750 was a quality assurance system, all the rage in the 1990s. It sought to provide a set of standards for a well-run organisation to manage quality, and it informed much of the thinking by the Legal Aid Board, as it then was, and the Law Society on how to assure quality in legal practice. It came to mind, alongside legal aid practice management standards and other joys, as I read the LSB’s ‘Ongoing Competence’ report, a summary of the evidence the LSB received on its consultation on how to regulate competence beyond admission. I was curious to see how much the quality debate had moved on.

What we see in this report, I think, is a keen interest from the LSB that the regulators it oversees get significantly better at regulating competence beyond admission; a sense that CPD policy in law is, how shall I put it, a bit of a dog’s breakfast; and a recognition that there are not a great many tested ideas about how to regulate or improve competence beyond admission. By tested, I mean ideas with any concrete evidence whatsoever about what works. There is the usual stuff too about the need to get lawyers to listen to feedback from their clients about how they can improve. Peer review, inspection and observation get regular mentions, as does the tendency of quality assurance systems to descend into box-ticking.

My instinct is that the LSB will lead a charge, or perhaps a thoughtful, consultation-strewn, desultory trudge, towards something like the system for doctors, perhaps lighter of touch. It caught my eye because the report refers to research suggesting there is evidence that it works.

“Independent reports commissioned by the GMC (Pearson (2017), Umbrella (2018)) identified that there had been some benefits from revalidation and that it has led to some doctors changing their clinical practice, professional behaviour or learning activities as a result.” (LSB, 2021)

The system of doctor revalidation has been mentioned regularly in this context for some time, so I took a quick look at this evidence of “some benefits”. The picture is what we can politely call nuanced. I will set that out in a sec. For the uninitiated, which includes me, let me outline what revalidation seems to entail. Doctors are now subject to five-yearly revalidation by a ‘responsible officer’ (RO), who, according to the Pearson report, is “usually a senior doctor within a healthcare organisation”. Revalidation confirms that the doctor has been engaging in an annual appraisal of substance and that there are no outstanding concerns about the doctor’s practice.

The appraisals are to be based on supporting information assembled by the doctor him or herself: the kind of information said to be generally available as part of the doctor’s normal practice.

The Umbrella report conceptualises “revalidation as an activity system that is itself made up of many interconnecting subcomponent activities focused on supporting information, appraisal, and appraiser and RO decision-making”. Or, perhaps most incisively, “Ultimately, revalidation’s ability to promote good professional practice is through the central role of high quality formative appraisal.” This feels right to me from my admittedly slim understanding of the process. Revalidation is an attempt to institutionalise and improve appraisal on a grand scale. It empowers appraisees to demand an effective yearly appraisal and provides some institutional support to the ROs validating every five years to tackle doctors who may have ongoing competence, or other, problems relating to their fitness to practise. It encourages some doctors to self-select out of the process (and so leave the profession – the evidence suggests this might impact more on some female, younger and BME doctors). The Umbrella report also evidences the following impacts:

  • It did not seem to work so well for locums or others outside single-employer situations, “where the ability to obtain an annual appraisal has been inconsistent”.
  • There were inconsistencies in the conduct/experience of appraisal and in the information utilised/expected during this process.
  • The availability and ease of collecting the information needed for the appraisal vary according to job role, setting and specialism.
  • Patient and colleague feedback was seen as the most useful information.
  • “Reflection on SI [supporting information] in appraisal is key for generating change, but reflection is often seen as just a product of appraisal, not necessarily translated into ongoing reflective practice. Expectations set locally, for example by employing organisations or individual appraisers, can influence doctors’ experiences of SI collection and can go beyond the requirements set by the GMC for revalidation.”
  • “A significant minority of doctors reported changing an aspect of their clinical practice, professional behaviour or learning activities as a result of their most recent appraisal. Overwhelmingly these changes related to the focus or quantity of their continuing professional development (CPD) activities, though changes have occurred across the domains of Good Medical Practice (GMP).”
  • Potentially negative impacts on practice or on professional autonomy were noted, as was a belief that “Revalidation, through appraisal, provides a means to document practice but may not necessarily improve professional practice.”
  • Revalidation was not generally believed, by those participating, to be a good means to identify ‘bad doctors’.
  • “There was no statistical evidence as yet” that employers had reduced the level of fitness to practise referrals made to the regulator, although the process was reportedly leading to the successful identification and local remedy of concerns, particularly in relation to workplace and health issues.
  • Concerns were expressed about the patient feedback tools used and the need for their refinement from both patient and doctor perspectives.

I think it is fair to say this is quite a mixed bag. It is a very interesting, and I am sure evolving, attempt to improve reflection within practice, and in turn competence and patient satisfaction, by giving proper formative appraisal a good hard push. There are benefits, and inevitable inconsistencies, within the process. The institutional support for improving appraisal and validation processes may well be valuable. Doubts about the cost-benefit of the system have inevitably been expressed too.

A key point of divergence between medicine and law may be the different cultures of essentially public professionals in medicine and lawyers, who predominantly work in private practice contexts and businesses. Similarly, I imagine the availability of useful data for appraisals in law is rather different from that in medicine. Firms are pretty bad, I am told, with data even on their own business; data relevant to the ongoing competence of their lawyers? If you decide to hold your breath, putting on an oximeter would be my advice. Perhaps that is why the other emphasis in the report is on improving client feedback. I am more sceptical that this will make much of a difference, especially compared to the accreditation route. Feedback’s value lies in it being carefully garnered and listened to. I am not at all sure the LSB can get its regulators to regulate for that.
