Simon Cavadino, Director and Care Consultant, The Woodberry Partnership, and Neil Grant, Partner, Gordons Partnership Solicitors, provide their perspectives on CQC’s draft sector specific assessment frameworks which were issued for consultation on 24 March 2026. The consultation runs until 5pm on 12 June.
Simon goes first with an upbeat commentary on the draft ratings characteristics and on the victory of inspector (professional) judgement over the discredited scoring system that has bedevilled the Single Assessment Framework from its inception.
Neil follows, setting out his views on how the frameworks should be applied in practice, focusing on the relationship between ratings and regulations, and the need to put in place a frequency of assessment as required by the CQC legislation.
Simon Cavadino – a Care Consultant’s perspective
It seemed like an achievable challenge when asked to draft something in a breezy and positive tone about the new changes to the CQC assessment/inspection framework. It was harder than I thought, but here goes.
At the end of March 2026 CQC announced the 24 new key lines of enquiry (KLOEs) and the ratings characteristics for each KLOE. The ratings characteristics are different for four specific sectors:
- Adult Social Care
- Mental Health Care
- Primary Care and Community Services
- Hospitals

The ratings characteristics
The ratings characteristics are an impressively detailed piece of work. I could never have finished writing them unless I was locked in captivity under threat of imminent harm. The characteristics for the four possible ratings draw a narrative distinction between the general levels of non-compliance that separate a Requires Improvement rating from an Inadequate one, and the general levels of compliance, proactivity and creativity that separate a Good rating from an Outstanding one.
On the surface this appears to be a useful exercise in clarifying expectations of consistency for CQC officers. However, I suspect that the characteristics will simply lead to more argument. They are necessarily vague since they will never be able to deal with every specific situation. For example:
Consent to Care and Treatment. For Requires Improvement: “Staff do not always seek consent to deliver care, support and treatment when required; staff do not fully understand the importance of consent or relevant requirements.”
For Inadequate: “Consent to care, support and treatment is not obtained in line with legislation and guidance, including the Mental Capacity Act 2005. Staff do not understand existing legal requirements.”
Good luck in debating the difference between those two statements without feeling like you are entering a pointless exercise in hair-splitting or dancing on the head of a pin.
Given the cheerful brief I have decided not to mention that we had a perfectly workable set of ratings characteristics under the old KLOEs. That was back in the halcyon days (that we did not realise were the halcyon days at the time) when CQC was at least ‘functional.’ Inspections were conducted fairly reliably and predictably at loosely set timescales depending on the rating. This created a sense that services were actually monitored. I have decided not to make the point that CQC could have just re-introduced these old KLOEs overnight months ago and started working functionally again.
The demise of the Single Assessment Framework and scoring
It is to be welcomed with joy and happiness that the single assessment framework is disappearing, especially the useless and invalid scoring framework. There is not universal agreement on this point, with some believing that scores are necessary to achieve consistency because less ‘inspector-judgement’ is required. This is a tempting position to take in a post-enlightenment world, but I disagree. The scoring system only created the appearance of achieving consistency, but did not achieve it. This was ultimately proven by the inevitable need for inspection teams to find ways around the scoring system, or to cheat it. Put another way, the ‘inspector judgement’ won through in the end and the scoring system was made to fit. The problem was not that CQC officers did this, it was believing in the first place that the scoring system would be valid. CQC deserves credit for scrapping it.
Inspector judgement at each key question level is the way to go. Contrary to what some may assume, inspector judgement is not a ‘whim’ or ‘wish.’ It is a judgement based on a wide range of evidence gathered through the inspection process and weighed up carefully. This includes, but is not limited to, observations of care, listening to views of staff, residents and relatives, assessing care planning systems, governance structures, medication systems, quality of life considerations, physical environments, management cultures and much more. There is no algorithm to weigh this and come up with the ‘correct answer’. It needs experienced humans to do it. CQC deserves credit for recognising this.
Conclusion
As we discuss inspection process, it is worth remembering why any of this is necessary. The answer is to improve the care experience for the vulnerable end-user, protecting people from poor care and harm. Experience tells me that the best way to achieve this is by encouraging providers to improve in a collaborative manner.
I remain fearful that CQC still does not value the factual accuracy process and is seeking to make it as difficult as possible for providers to respond to inspections. It has long been my view that constructive dialogue with inspection officers leads to overall improvement. Feedback from providers on inspection findings should be encouraged, not discouraged. CQC seems more focused on defending the accuracy of its draft inspection reports than on seeing the inspection process as a tool to drive improvement. CQC would do well to reflect on this.
With my positivity battery inevitably starting to wane, I hand over to Neil…
Neil Grant – a lawyer’s perspective
The task of writing a critical analysis of CQC’s draft assessment frameworks is a difficult one for me as I am supportive of CQC’s decisions to return to sector specific frameworks and to reinstate ratings characteristics. It was the obvious thing to do and I recall Simon and myself advocating for it back in 2024. The abandonment of the Single Assessment Framework (SAF) has been far too slow but at least it is now happening. Quite what it means for all the providers who have been inspected under the flawed SAF since December 2023 and will be this year until the new system is in place, remains a debating point. I am doing my best to be polite, as you can tell.

Faced with the dilemma about what to write about given Simon’s authoritative analysis, I have decided to highlight two issues that relate to the broader methodology of assessment/inspection which so far CQC has been silent about in relation to the proposed new framework:
(1) the relationship between ratings and regulations and
(2) the frequency of inspections.
I do this in an attempt to influence CQC’s thinking on how it should apply the new methodology once it comes into effect, noting that CQC is obliged to issue a “method statement” under section 46 of the Health and Social Care Act 2008 describing how it proposes to assess and evaluate performance.
The focus of this article is on adult social care services, but it will also be relevant to those other CQC registered services that fall under the other assessment frameworks.
Ratings and regulations
There are two main aspects to CQC’s evaluation of adult social care services: (i) compliance with statutory requirements and (ii) performance assessment which, as you all know, is currently based on ratings ranging from Inadequate up to Outstanding. Both are part of CQC’s regulatory toolkit but they are quite different.
Compliance was the focus of CQC’s work under the old Essential Standards of Quality and Safety. However, since ratings were introduced for adult care services in October 2014, CQC has ridden two horses: compliance and performance assessment.
A breach of regulation (non-compliance) may lead to enforcement action. An Inadequate performance rating will lead to a service being placed in special measures (which has no legal basis, by the way) and judged as being high risk but, of itself, it is not a basis for enforcement. CQC can only take enforcement action when there is non-compliance.
CQC is currently consulting on the performance element, i.e. the ‘quality indicators’, which it is required to put in place under section 46 of the Health and Social Care Act 2008. Ratings are underpinned by statute and thus have a legal basis. However, the regulations (known colloquially as the Fundamental Standards) are not changing, so there is nothing to consult on as far as compliance is concerned.
The draft assessment frameworks do not mention the regulations at all which I approve of. CQC says it will issue guidance that will “show how each topic maps to the fundamental standards of care, set out in the Health and Social Care Act (Regulated Activities) Regulations 2014, and the Care Quality Commission (Registration) Regulations 2009.” On this same point, the Care Provider Alliance adds there is a need for “clarity in relation to boundaries between rating levels and the link with compliance against the regulations.” This sounds sensible until you start to reflect on how CQC has linked performance and compliance in the past and does so currently.
Sensibly, Ofsted, CQC’s regulatory older sibling, separates compliance from performance in the sense that a failure to meet a regulation does not automatically lead to a Requires Improvement rating. As Ofsted writes, “Requirements may still be made by Ofsted when providers are judged to be Good”. This shows that Ofsted treats performance and compliance as separate evaluative processes. CQC should apply the same approach. However, based on past and current performance, my concern is that CQC will put in place a policy where a breach of regulation leads to a Requires Improvement rating at the key question level or possibly even at the overall rating level.
You may remember that before the SAF, a breach of a single regulation by an adult social care provider would lead to an overall rating of no better than Requires Improvement, no matter how minor the issue. No proportionality was applied by CQC. It was a blanket approach that almost certainly was unlawful, as well as being inconsistently applied. For example, GPs were treated differently with a breach of regulation only affecting the relevant key question rating for their services, not the overall rating.
Things changed under the SAF, at least to start with. A provider could be in breach of regulation and get a Good rating for the key question that was linked to the regulation in question. CQC came to feel uneasy about this and started to “moderate” its scoring in such a way as to bring the key question rating down to Requires Improvement whenever there was a breach. Often it did this by the use of the rating limiter – a score of 1 for a single quality statement brings the key question rating down to Requires Improvement irrespective of the other scores.
The term “moderation” gives the process a certain respectability when in actual fact it is really manipulation of the scoring system to achieve a desired outcome. CQC will say the moderation process is justified as it ensures non-compliance is not ignored in the assessment process. However, I have acted for many providers who would and should have achieved Good ratings but for the unreasonable application of these limiters. It is little wonder that more and more services are being rated Requires Improvement. In contrast, Ofsted does not apply limiters of this nature and nor should CQC. They are artificial, arbitrary and undermine the free application of professional judgement by CQC’s inspectors which was always the core element of the inspection system until CQC started introducing ill-judged rating limiters and scoring.
Frequency of inspections
CQC is required under section 46 of the Health and Social Care Act 2008 to set an assessment frequency as part of its performance assessment activity but for years CQC has ignored this. As Simon rightly says, assessment/inspection is an ongoing process. CQC needs to get back to assessing and inspecting services according to a defined frequency.
CQC has set a target of 9,000 assessments across all providers between April 2025 and September 2026. It will meet this target and probably exceed it by a bit. But it is absurdly low. To put matters in perspective, in 2005-06, the Commission for Social Care Inspection carried out over 48,000 inspections (adults and children) when there was still a statutory requirement to inspect care homes twice a year. Children’s homes are still inspected annually by Ofsted, by the way.
CQC accepts there is a significant risk associated with not having a frequency of assessment in place. In its Corporate Risk Register for Quarter 3 presented to the CQC Board meeting on 11 March 2026, CQC writes:
“If we are unable to deliver certainty for the frequency of assessments under section 46 of the Health and Social Care Act 2008, then we will not meet our duties as a regulator.”
The risk status is red and “exceeds tolerance.”
A good start would be to revert to the pre-pandemic frequency of inspection which for adult social care was 6 months from publication of the inspection report for Inadequate services, 12 months for those rated Requires Improvement, and 30 months for those rated Good or Outstanding. Services should not have to wait years and years for a reinspection.
Conclusion
Aside from a bit of grant funding from the DHSC, registered providers fund CQC. They deserve a level of service that is at least satisfactory and preferably good. CQC should re-read the Regulators’ Code and issue quality standards in relation to its own service delivery backed up by a Code of Conduct for its staff. It should then publish an annual assessment of its own performance against those standards.
I wish CQC well in its endeavours. England needs a regulator that is fit for purpose. The latest consultation is a major step in the right direction.
The deadline for responding to the CQC consultation is 5pm on 12 June.
The response can be submitted via the CQC website using this link: Give your views on draft sector-specific assessment frameworks – Care Quality Commission