Other things that did emerge from this report were that almost all of those interviewed considered that time spent communicating with residents was critically important, not only for quality of life but for quality of care. Staff consistently found the accreditation process extremely burdensome.
To see the full report go to the following web page
It is no secret that the way in which accreditation operated in the USA has made me extremely skeptical about its utility in protecting residents, particularly in the face of strong market forces and when confronted by powerful groups.
Hundreds of children were needlessly imprisoned and harmed in psychiatric hospitals. Another 700 adults had major heart operations which they did not need. In some instances whistleblowers who gave information in confidence were reported to their hospitals, where they were victimized. All of these hospitals were accredited in the USA by the Joint Commission, which has become the model and benchmark for accreditation internationally. I have consequently been very skeptical about the introduction of accreditation to control aberrant behaviour in aged care in Australia.
My criticism should not be construed as an objection to accreditation as such. When people are enthusiastic and embrace its principles it can be a motivating force, and the practices adopted can enhance care. My criticism is that it does not work when staff are not motivated and when there are strong pressures in the system which place a strain on the services provided. When care and profit compete it performs poorly and is frequently gamed.
In Australia the medical profession has remained independent, powerful and motivated. Professional independence in the face of corporate pressure has prevented the excesses encountered in hospitals in the USA. Accreditation has been supported and has probably been beneficial. Doctors and not accreditation have been a deterrent to deviant practices in our hospitals.
Accreditation then seems to be a useful process. The reviewers in essence were asked whether that was so and nothing else. This is a lengthy and laborious report and it ultimately tells us that all involved agree with this and think that it improves care. It did not do more than this. It did not look at the many criticisms made, nor did it look at its utility in doing the many other things the agency was asked to do.
The reviewers were not asked and did not assess whether this should be the process whereby homes are evaluated or sanctioned. They did not examine its value as a source of information for prospective residents and their families, or for those researching the sector. Accreditation was developed to help those who want to improve. That is what it does. It's the other things it has been expected to do that we question.
The text is so dense that those scanning through the material (most of us) will skim past the qualifying comments and read the findings as if accreditation were doing all of these things well. We expect something that addresses our concerns, and it is easy to miss that this is not what the report is doing.
My interest in this report is in the way it was written, the way the assessment was done, the attitudes revealed, the limitations revealed, and what was not addressed. Did it refute my criticisms? Did it negate the allegations by nurses that the process was a farce? No, it carefully avoided these issues.
The project was led by Campbell Research & Consulting (CR&C), who worked with associates from DLA Phillips Fox Lawyers and Monash University. It looks to be very much a product of the business world. This is not a review of the accreditation agency. It only asks whether accreditation is beneficial. It did not claim to be a response to the June 2005 Senate report on Quality and equity in aged care, but many might see it as such. The Community Affairs Committee was very critical of accreditation and questioned whether it was beneficial. The review was commissioned in November 2004.
There are a number of impressions I get. Firstly, it is an extremely wordy report loaded with "quality" jargon. I am skeptical about the overuse of words which have associative rather than denotative meanings, especially when they are embraced as enthusiastically and wholeheartedly as they are in this report. It reads more like a public relations exercise for the department.
Those who prepared the report are dedicated experts in this field and enthusiastically promote the processes on which the accreditation agency operates. I understand the underlying principles on which the "quality improvement" process is based. I recognize them as an integral part of the way we humans build our world. Recognizing them explicitly and cultivating them is fine and beneficial.
That said, my impression is that this movement has become obsessive and has been overdone to the extent that it has lost objectivity and insight and so is used inappropriately. Like other ideological patterns of thinking it has been applied to situations where it is not directly relevant. The process is top heavy with expectations and claims but the root system on which it stands is small. The accreditation agency's processes are situated within this pattern of thought and the agency operates within it. The agency has picked its own mentors to do this assessment. This is not an outside assessment as most of us would understand it.
I am also suspicious when a scientific publication is laced with unnecessary positive adjectives describing its activities. This reveals an underlying uncertainty. This review is laced with quality jargon and unnecessary adjectives, making it dense for anyone trying to unravel what the authors are actually saying. The word "quality" is appended wherever it can be. The language sets a skeptic's teeth on edge. A few example phrases illustrate this:
experts in aged care quality indicators - aged care and quality measurement experts - A total of 18 peak bodies - a suite of draft quality indicators - quantitative measurement of the perceptions - domains of quality - rigorous and comprehensive qualitative research methodologies
As a tour de force the review is impressive. I struggled to find things that meant something concrete to me within the litany of words.
In 2003 the Australian National Audit Office (ANAO) identified a need to assess the impact of accreditation on the quality of care. The Joint Committee of Public Accounts and Audit suggested this be extended to quality of life. The review was not asked to assess whether accreditation was effective as a regulatory tool, nor as an information resource for consumers or researchers. The problem is that, despite its specific disclaimer, this will be seen as an evaluation of the current accreditation system and of the accreditation agency. It was neither.
The review process
The first revelation was that, in spite of its undertaking in 2003, the accreditation agency had never collected solid data and there was nothing to compare current practices with. In addition factors other than accreditation would be at play. Their contribution could not be monitored.
There was no baseline data to enable direct comparison of quality of care and quality of life of residents in aged care homes today, compared with that which existed prior to the introduction of the new regulatory framework in 1997 (Page vi)
The second revelation was a confirmation that accreditation did not do what it was claimed it did. It lacked sensitivity and recorded a minimum only. They did not collect the data they had agreed to collect.
The high proportion of homes achieving all 44 Expected Outcomes is a positive indication of overall quality, but reflects the lack of sensitivity to improvement over time, particularly for those high performing homes consistently achieving 44 Expected Outcomes.
The scale currently used to assess the Standards is not adequate to measure variations in performance or to recognise degrees of achievement beyond compliance to a minimum standard.
The Standards do not allow for measurement of change across time or across the residential aged care sector. (Page xii)
They (accreditation outcomes) are not a rating of the level of quality of care or quality of life in residential aged care homes and they do not demonstrate that accreditation is more effective than other regulatory approaches to quality improvement. (Page 62)
- - - - there are no measures available to consistently assess the level of quality outcomes derived from the funding provided by the Australian Government to residential aged care providers. (Page 83)
They carefully examined the legislation, then they looked at the literature. Finally they went to stakeholders in the nursing homes and asked them - and it seems clear that they went to homes that offered to have them there, homes that probably knew that the information obtained would not be negative. By 2007 some staff, a number of residents and even the press were criticizing accreditation and urging a more stringent and objective oversight process. It is clear that this review would not advise this.
The purposes of the qualitative research were to:
- - - - residents were not invited to participate in the surveys undertaken in Stage 2. This was primarily because the qualitative consultations with residents conducted in Stage 1 identified that, while residents had clear views on quality, they had limited awareness of accreditation and as such were not in a position to provide a view of the impact of accreditation. (Page ix)
The accreditation process and its objectives are described in detail. The review's methodology was developed to
- - - evaluate the impact of accreditation in a context where there was no established benchmarks or comparative interventions and where accreditation was one of a number of initiatives that may impact on quality of care and quality of life in residential aged care. (Page 4)
The report points out that in spite of a widespread belief that accreditation improves quality of care, there is a "lack of specific research evidence to confirm this belief". (Page 6) Their research was by interview and questionnaire.
These consultations were comprised of 34 focus groups or forums and 45 in-depth interviews. This extensive national consultation involved stakeholders with direct experience in aged care homes including residents, family members and friends, direct care staff, nurses, providers, allied health and medical professionals, managers of residential aged care homes and quality assessors.
Consultations were undertaken with peak bodies including industry, agency representatives, consumer groups, professionals and carers groups/representatives. A total of 18 peak bodies were included in this process. (Page 11)
Arguments are advanced for the methodology used by the agency in accreditation. There is a detailed discussion of the differences between different types of oversight and accreditation in Australia as compared with Canada, the UK, Denmark, New Zealand and the USA.
The Accreditation Standards are described as outcome standards. They differ significantly in their expression from standards in many other jurisdictions. (Page 28)
I was left wondering why they had not mentioned the failures in the accreditation processes in these countries particularly the glaring failures in the USA. There was no mention of the many reported failures in Australia. As a consequence no attempt was made to analyse the reasons for them and so to identify weaknesses in the processes. This was not their brief nor an interest of the reviewers. It is ours!
There is a long discussion about the differences between "Quality of Care" and "Quality of Life". I have no problem with the distinction, particularly with the clear link identified by those who contributed, between both Quality of Life and Quality of Care on the one hand, and the amount and type of formal and informal communication between staff and residents on the other. Staff made this very clear to the review. This is the sort of information Hogan needed to confront in his 2004 report pressing for greater efficiency, but did not seek.
This (communication with staff) was such an important element in stakeholders' views on quality that it emerged as a discrete domain of quality and was strongly aligned to quality of life. Interaction also emerged as a key driver of quality of care (for care staff and family carers) and quality of life (for care staff) in the CR&C Aged Care Survey key driver analysis. (Page 71)
The impact of care time and staff-resident interaction is particularly important in understanding quality of care and quality of life. Quality of life is particularly likely to depend upon adequate provision of staff time and this may at least in part explain why quality of life was not generally seen to have improved in the sector as a result of accreditation to the same degree as quality of care. (Page 71)
This did not get us any further, as the authors went on to argue that there were no "indicators" of Quality of Life, and that concrete indicators of Quality of Care like pressure sores and weight loss were difficult to collect and were subject to so many variables that they were of little value for their purposes. In talking about these "quality indicators", as they were called (NOT failures in care), the review said:
Indicators are not absolute measures of quality, rather, it is the action triggered in response to indicator data that determines their usefulness. (Page 84)
Whatever the indicators used, they are only useful as far as they initiate and strengthen the continuous quality improvement process. (Page 85)
I beg to differ. The rest of us are interested in exactly what is happening in these nursing homes, and these indicators (qualified, if required, by known variables and further investigation) tell us just that. For the quality guru, whose objective is to facilitate the quality process, they may be no more than triggers to initiate that process, but that is a quite separate issue. This pattern of thinking is not acceptable for any other purpose. How can anyone apply sanctions based on the absence of processes whose outcome you cannot prove and random observations made at infrequent intervals? How can any nursing home that has had serious problems reclaim accreditation by setting in place processes without showing that they have worked to prevent further failures in care? It is little wonder that the appeal tribunal sets aside so many of the sanctions imposed.
The CHSRA (presumably the Center for Health Systems Research and Analysis in Wisconsin USA) view quoted below is fair enough when dealing with the response to localized events in single nursing homes. But in evaluating what is happening at a home over time, in the industry as a whole, and between sections we need access to what is actually happening. That information must be recorded.
Quality indicators are pointers, or flags, that indicate potential problem areas which need investigating and the starting point for a process of evaluating quality through careful investigation'. (CHSRA 2000) (Page 84)
Indicators are often derived from standards or guidelines to indicate how well those standards are being implemented. For example, the incidence and prevalence of pressure ulcers are generally accepted as valid indicators of skin and hygiene care, even of nursing care more generally, as well as reviewing a safety risk to the patient or resident, and this indicator is used in quality programs across many care settings. (Page 88)
In the real world a hole in someone's bottom is a hole, is a hole, and it should not be there. However many qualifications and explanations may be forthcoming, it should still not be there. It is still a failure of care and should be recorded as such. Using jargon like "quality indicators" places a sanitizing filter across the harsh reality of a hole in someone's bottom. It's like talking about "passing on" instead of death. Any evaluation must start by documenting and counting failures, and it should acknowledge them as failures.
We realize that in the real world failures occur, but we want to know how many people have holes that should not be there, and why and how they got there - why the system failed. This tells residents and their families what they might expect in a nursing home. It gives researchers something concrete that they can analyse. It alerts the community to what is being done or not done on their behalf, and allows the community to lobby for more money or for changes in the system.
This is accountability and transparency. A hole in someone's bottom is a serious medical complication. People die from it. Expert care is needed. The resident becomes a patient, and a doctor is consulted (or should be). It is written in the patient's notes. We can grade and refine, but there is no ambiguity that there is a hole there and it should not be there. We know it is difficult, people are human, and we will not always be perfect. But we want to know about it and know that it is being kept to a minimum.
In acute health care failures are related to specific conditions, so each is less frequent, and multiple factors contribute to causation. Evaluation and analysis are more difficult and complex. In nursing homes there are several key failures in care that are already recorded, and they have the same underlying cause. There is one critical determinant of failure in the majority of instances - nursing - the biggest cost in the system. As a community we all want to know about the nursing care, and these indicators tell us.
In the USA these failures are reported. They have been used by the public and the press, to analyse the sector and to criticize the individual nursing homes and direct attention to sectors that had high rates of failed care.
This is why the nursing homes in Australia resist the reporting of these "indicators". This is why an organization set up to support them, and dependent on their cooperation, dare not collect them and will rationalize in order to make this look legitimate. That providers perceive the recording and reporting of "indicators as a measure of performance" as a threat and don't want this is clear from the following:
Quality indicators are not well understood in the sector and there are numerous concerns about their use in the sector, particularly in relation to funding. To address these issues:
The purpose of the indicators should be confirmed to the sector - the basis for the indicator development was the clear understanding that they were being developed not to measure performance, but as tools to assist aged care homes to monitor and improve the quality of their care and services;
These "indicators" which record what is actually happening are what the rest of us consider as measures of the rate of failure. That is what we want to know about. They mean something to us. Quality processes are to help providers who want to be helped. They do not help us.
How the Review was conducted
Having found no prior data to analyse, the reviewers set out to generate their own from the literature, by using the processes whose validity they were testing, by holding a talkfest, and by interviewing and sending questionnaires to those involved in the process. There is a long discussion about the methods used to analyse this data and make it valid - but essentially they were analysing the views of a selected group of people who were involved in the various processes and who identified with them. As I see it these findings show their views and not much else. To analyse them they divided the opinions about quality into five domains.
The stakeholder-derived model had a focus on the overall concept of quality, which reflected the views expressed by residents and their family members. The domains identified were environment, services, interactions, personal factors and health. These domains of quality were reviewed positively by stakeholders throughout both stages of the project - - - - . (Page xi)
While the five domains contain a mix of attributes, some of which can be measured objectively, they are fundamentally based upon the perceptions, values and experiences of stakeholders. This approach is unusual in the literature. However, it is consistent with contemporary trends towards increased resident focus and legitimises the importance of examining the subjective elements of care which can make a major contribution to satisfaction with overall quality, and with quality of life in particular. (Page 53)
Stakeholders employed within the residential aged care workforce (providers, managers and staff ) as well as representatives from a number of peak bodies representing consumers, were positive about the impact of accreditation on the overall quality in the sector. (Page xii)
They concluded that accreditation is beneficial because those involved in the process think so. This is not the question the community is asking. The review does not tell us if accreditation is effective in controlling poor operators nor does it tell us who the good ones are. The recurrent scandals where unconscionable practices come to light in spite of accreditation strongly suggest that it is not effective in detecting failures in care and does not work when the recipients are not motivated - or are driven by contrary pressures.
We know that some poor and inept operators who were incapable of fixing things when they were confronted with them have been forced out of business. In a number of them the problems were exposed by whistleblowers or by the press, and had not been detected by accreditation.
In the absence of a meaningful financial sanction, it would be highly likely that some providers would assess the cost of complying with their regulatory responsibilities as higher than the cost of avoiding them. (Page 94)
Experience elsewhere shows that it is highly likely that some providers would find that by creating an impression of complying and so gaming the agency they could make their businesses much more profitable. Isn't this what nurses are telling us when they call the process a farce?
The US experience tells us very clearly that it is the clever ones - the "successful" ones - we should worry about. Their drive for profits causes them to game the system and get away with an absolute minimum. All too often these companies are the ones who are most successful, most credible, and have most influence. Their critics are ridiculed.
Under the guise of "quality", "excellence", "best practice" and a variety of salesman's words we have a system which specifies only a minimum, and for which full marks are routinely given. It does not recognize or reward anything above this.
As indicated above those interviewed thought that quality of care had improved and that accreditation was important in this.
Generally, the level of quality in residential aged care was seen positively by stakeholders as having improved over the 10 years since accreditation was introduced. - - - - However, stakeholders consistently identified accreditation as the most influential factor in driving improvement in the sector. (Page x)
The central finding of the project has been that the existing system of accreditation has achieved the direction intended. The project found that the structure of accreditation using an independent authority with outcomes of assessments linked to sanctions including, ultimately, financial penalties has achieved this impact. (Page xvii)
The conclusion of the project is that the current system of accreditation is capable of achieving both stimulating continuous quality improvement and assuring compliance with minimum standards, and increasing evidence is emerging that the system is successfully doing so. (Page 34)
Close to nine in ten quality managers and care staff (although a higher proportion of quality managers than care staff ) agreed that: - - - Accreditation has supported continuous quality improvement - - - and - - - Accreditation has ensured that there is an acceptable minimum standard of care - - (Page 73).
An interesting observation was that 70% did not think accreditation had improved staff satisfaction. This raises an oblique question about accreditation as a motivating force. If those who speak out are correct then this might reflect some tension between the interviewees' beliefs (ie culturally expected) and the context within which they work (which challenges the required belief). The majority found the process burdensome.
A very strong view was consistently expressed by stakeholders consulted in the course of this project that the administrative requirements of accreditation for homes are extremely burdensome, although some providers report they are managing the accreditation process better as they become more familiar with the requirements. (Page 96)
For the future
The report does make a few suggestions for the future. It does talk about the collection of "quality indicators" but these are to be kept by the facilities to initiate the quality process. Presumably they will check that the paperwork shows they have done so. The agency is not going to look at their incidence and no one is going to tell us about that.
What was asked for?
In fairness the review was not asked to examine most of the issues I have raised here. It was a very limited brief and while the review wrote a great deal about the processes it stuck to that brief and carefully hedged and qualified its statements.
The project found accreditation, together with the regulatory framework in which it is embedded, is an appropriate way to improve quality in residential aged care and has achieved an overall improvement in residents' quality of care and quality of life. The project was not intended as an evaluation of the Aged Care Standards and Accreditation Agency (the Agency) or its processes. However, the findings identified areas that could be addressed to improve the efficiency and effectiveness of the aged care accreditation system. (Page v)
The problem is that most would have read past this. It would be seen as an endorsement of the accreditation agency and would be used to refute its critics.
This review of accreditation in Australia illustrates another observation I have made. This is the large divide in perceptions between those at the coal face (eg. nurses) on the one hand and those in charge of the facilities, or who are part of the "system", on the other. In the vast majority of instances complaints made by even a small number of those at the coal face (nurses and residents) eventually turn out to be reasonably accurate - at least for the facilities where those people are located. It takes a lot of courage to stand out against prevailing views and criticize. Few do this lightly.
As humans in a social milieu we see things only from where we are standing. Few of us will express negative views when we are surrounded by positivity, even when asked to express a view confidentially on paper. We feel uncertain, have doubts, hesitate, then shrink away.
The experience of whistle blowers who expose deviant behaviour is a good illustration of the sort of cultural ambiguity that pervades organizations. Whistleblowers are turned on and isolated by the majority of participants and their claims are rejected. Interviews and questionnaires are at risk of encountering this cultural bias when there are outside criticisms of what the participants are doing. They feel they are expected to rally around and do so.
To see this divide here, read the criticisms made about accreditation by those in the system, for example in the Senate Community Affairs References Committee's report "Quality and equity in aged care" released in June 2005 or recounted in many press reports. Many at the coal face considered accreditation to be a farce.
Contrast that with the content and conclusions of this 2007 review of the accreditation process, as set out below. Both groups of people are patently genuine and believe what they are saying.
Based upon the evidence collated in the course of the project a number of key findings have been identified. Briefly, these findings reflect the views of stakeholders that accreditation has had a positive impact on the quality of care and the quality of life for residents in Australian Government subsidised aged care homes, and the high level of confidence that the overall quality of care and services provided to residents was of a high standard. The findings support the view that accreditation is part of a robust regulatory framework that is well grounded in good practice. (Page ix)
The question I asked myself was whether this review counters the criticisms, and whether it refutes my distrust of the system and my view that accreditation processes that are not ultimately based on hard measurable data are readily subverted and do not adequately protect residents.
After examining the review I find that the evidence which would show whether it does or does not adequately protect the frail elderly has not been collected. This report does not provide it. The findings of the review are what you would expect from the process adopted and the group of people who were interviewed.
Letter to the New Minister for Ageing, The Hon Justine Elliot MP, December 11th 2007
Both Coalition health and aged care ministers had undertaken to make changes to the approved provider regulations, but Labor had refused to make a commitment prior to the 2007 election. I wrote to the new Labor minister about this soon after the election. I had not yet read the Oct. 2007 review of accreditation. I took the opportunity to address the issue of a market in decrepitude and the problems inherent in accreditation.
Aged care cannot and has never operated as a marketplace. There is a bed shortage so there is no choice, and even if there was choice the dynamics of these late in life choices ensures that it is not effective. Rhetoric about choice and market forces is ideological gibberish. The marketplace in nursing homes has become one where participants compete to see how much profit they can squeeze from the system without alarming the agency and without generating a community backlash - a mediocre standard.
The commodification of humanitarian services into packages traded in the marketplace detaches them from the values and norms of society. Without exercise these values atrophy.
In my view a health and aged care system which is based on a mechanism which exploits the misfortune and vulnerability of the sick and elderly for the financial benefit of disinterested shareholders is socially unsustainable and destructive of a caring society. The conflicting paradigms intrinsic to a system, which aims to provide intensely personal humanitarian caring services through an impersonal market mechanism, render the system at ongoing and sustained risk.
As indicated accreditation based on an external body is unlikely to be effective. All nursing homes either do or should keep records of income and expenditure, as well as ongoing data of failures in care including staff levels, pressure sores, contractures, weight loss, medication errors, recreational activities and feedback from staff and families. Such data is essential for proper management.
This is the data which, in an ideal world, they would be disclosing publicly and discussing with groups from the community served, as they worked together for the residents. They can hardly claim this as onerous. It should be the responsibility of the outside authority to check the accuracy of the data and ensure that it is collated and taken to the community.