Jimmo v. Sebelius Settlement Agreement Fact Sheet
On January 24, 2013, the U.S. District Court for the District of Vermont approved a settlement agreement in the case of Jimmo v. Sebelius, in which the plaintiffs alleged that Medicare contractors were inappropriately applying an “Improvement Standard” in making claims determinations for Medicare coverage involving skilled care (e.g., the skilled nursing facility (SNF), home health (HH), and outpatient therapy (OPT) benefits). The settlement agreement sets forth a series of specific steps for the Centers for Medicare & Medicaid Services (CMS) to undertake, including issuing clarifications to existing program guidance and new educational material on this subject. The goal of this settlement agreement is to ensure that claims are correctly adjudicated in accordance with existing Medicare policy, so that Medicare beneficiaries receive the full coverage to which they are entitled.
In the case of Jimmo v. Sebelius, the Center for Medicare Advocacy (CMA) alleged that Medicare claims involving skilled care were being inappropriately denied by contractors based on a rule-of-thumb “Improvement Standard”—under which a claim would be summarily denied due to a beneficiary’s lack of restoration potential, even though the beneficiary did in fact require a covered level of skilled care in order to prevent or slow further deterioration in his or her clinical condition. In the Jimmo lawsuit, CMS denied establishing an improper rule-of-thumb “Improvement Standard.” The Court never ruled on the validity of the Jimmo plaintiffs’ allegations.
While an expectation of improvement would be a reasonable criterion to consider when evaluating, for example, a claim in which the goal of treatment is restoring a prior capability, Medicare policy has long recognized that there may also be specific instances where no improvement is expected but skilled care is, nevertheless, required in order to prevent or slow deterioration and maintain a beneficiary at the maximum practicable level of function. For example, in the regulations at 42 CFR 409.32(c), the level of care criteria for SNF coverage specify that the “. . . restoration potential of a patient is not the deciding factor in determining whether skilled services are needed. Even if full recovery or medical improvement is not possible, a patient may need skilled services to prevent further deterioration or preserve current capabilities.”
The Medicare statute and regulations have never supported the imposition of an “Improvement Standard” rule-of-thumb in determining whether skilled care is required to prevent or slow deterioration in a patient’s condition. A beneficiary’s lack of restoration potential cannot, in itself, serve as the basis for denying coverage, without regard to an individualized assessment of the beneficiary’s medical condition and the reasonableness and necessity of the treatment, care, or services in question. Conversely, coverage in this context would not be available in a situation where the beneficiary’s care needs can be addressed safely and effectively through the use of nonskilled personnel.
Thus, such coverage depends not on the beneficiary’s restoration potential, but on whether skilled care is required, along with the underlying reasonableness and necessity of the services themselves. Any Medicare coverage or appeals decisions concerning skilled care coverage must reflect this basic principle. In this context, it is also essential and has always been required that claims for skilled care coverage include sufficient documentation to substantiate clearly that skilled care is required, that it is in fact provided, and that the services themselves are reasonable and necessary, thereby facilitating accurate and appropriate claims adjudication.
The Settlement Agreement - No Expansion of Medicare Coverage:
The Jimmo v. Sebelius settlement agreement itself includes language specifying that “Nothing in this Settlement Agreement modifies, contracts, or expands the existing eligibility requirements for receiving Medicare coverage.”
The settlement agreement is intended to clarify that when skilled services are required in order to provide care that is reasonable and necessary to prevent or slow further deterioration, coverage cannot be denied based on the absence of potential for improvement or restoration. As such, any actions undertaken in connection with this settlement do not represent an expansion of coverage, but rather, serve to clarify existing policy so that Medicare claims will be adjudicated consistently and appropriately.
CMS plans to conduct the following activities under the terms of the settlement agreement:
Clarifying Policy – Updating Program Manuals
The first action CMS will undertake as specified in the settlement agreement will be revising the relevant program manuals used by Medicare contractors. The Medicare program manuals will be reworded for clarity, so as to reinforce the intent of the policy. Specifically, in accordance with the settlement agreement, manual revisions will clarify that coverage of therapy “...does not turn on the presence or absence of a beneficiary’s potential for improvement from the therapy, but rather on the beneficiary’s need for skilled care.”
Educational Campaign – Informing Stakeholders
The next step CMS will take will be an educational campaign for contractors, adjudicators, and providers and suppliers. CMS will disseminate to these recipients a variety of written materials, including:
• Program Transmittal;
• Medicare Learning Network (MLN) Matters article;
• Updated 1-800 MEDICARE scripts.
CMS will also conduct national conference calls with providers and suppliers as well as Medicare contractors, Administrative Law Judges, medical reviewers, and agency staff, to communicate the policy clarifications described herein and answer questions.
In addition, to ensure beneficiaries receive the care to which they are entitled, CMS will engage in accountability measures, including review of a random sample of SNF, HH, and OPT coverage decisions to determine overall trends and identify any problems, as well as a review of individual claims determinations that may not have been made in accordance with the principles set forth in the settlement agreement.
According to the terms of the settlement agreement, CMS will complete the manual revisions and educational campaign by January 23, 2014, which is within one year of the approval date of the settlement agreement.
by Ron Ashkenas | 10:00 AM April 16, 2013
As a recognized discipline, change management has been in existence for over half a century. Yet despite the huge investment that companies have made in tools, training, and thousands of books (over 83,000 on Amazon), most studies still show a 60-70% failure rate for organizational change projects — a statistic that has stayed constant from the 1970s to the present.
Given this evidence, is it possible that everything we know about change management is wrong and that we need to go back to the drawing board? Should we abandon Kotter's eight success factors, Blanchard's moving cheese, and everything else we know about engagement, communication, small wins, building the business case, and all of the other elements of the change management framework?
While it might be plausible to conclude that we should rethink the basics, let me suggest an alternative explanation: The content of change management is reasonably correct, but the managerial capacity to implement it has been woefully underdeveloped. In fact, instead of strengthening managers' ability to manage change, we've allowed managers to outsource change management to HR specialists and consultants rather than taking accountability themselves — an approach that often doesn't work.
Here's an example of this pattern: Over the course of several years, a major healthcare company introduced thousands of managers to a particular change management approach, while providing more intensive training in specific tools and techniques to six sigma and HR experts. As a result, managers became familiar with the concepts, but depended on the "experts" to actually put together the plans. Eventually, change management just became one more work-stream for every project, instead of a new way of thinking about how to get something accomplished.
Obviously, not every company lets its managers off the hook in this way. But if your organization (or your piece of it) struggles with effectively implementing change, you might want to ask yourself the following three questions:
- Do you have a common framework, language, and set of tools for managing significant change? There are plenty to choose from, and many of them have the same set of ingredients, just explained and parsed differently. The key is to have a common set of definitions, approaches, and simple checklists that everyone is familiar with.
- To what extent are your plans for change integrated into your overall project plans, and not put together separately or in parallel? The challenge is to make change management part and parcel of the business plan, and not an add-on that is managed independently.
- Finally, who is accountable for effective change management in your organization: Managers or "experts" (whether from staff groups or outside the company)? Unless your managers are accountable for making sure that change happens systematically and rigorously — and certain behaviors are rewarded or punished accordingly — they won't develop their skills.
Everyone agrees that change management is important. Making it happen effectively, however, needs to be a core competence of managers and not something that they can pass off to others.
Websites such as Lumosity.com make some bold promises about the effectiveness of computer-based brain-training programs. The site claims:
“Harness your brain’s neuroplasticity and train your way to a brighter life”
“Your brain’s abilities are unique. That’s why your Personalized Training Program adapts to fit your brain and your life goals.”
“Just 10 hours of Lumosity training can create drastic improvements. Track your own amazing progress with our sophisticated tools.”
Wow – in just 10 hours I can become smarter by playing fun video games personalized to my brain. I’m a huge fan of video games, and I would love to justify this hobby by saying that I’m training my brain while I play, but what does the scientific evidence have to say about such claims?
Not surprisingly, the published evidence is complex and mixed.
Before I summarize that evidence, let me describe the variables with which brain-training research must contend. First there are various target populations who likely will not respond in the same way to brain-training interventions. These include: healthy children, healthy young adults, healthy older adults, children with some form of cognitive impairment or developmental delay, adults with traumatic brain injury, older adults with mild cognitive impairment, and older adults with Alzheimer’s disease or other forms of dementia.
Most studies do indeed pick a target population or two on which to focus. Each of these populations needs to be considered separately when reviewing the literature.
The second important variable is the brain function that is being evaluated. There is no single measure of brain function or intelligence. Studies typically identify the following distinct functions:
Memory is the ability to encode, store, and recall information. Memory can be further divided into recognition, recall, verbal, visual, episodic, and working memory. Each type of memory has specific tasks associated with that memory function.
Attention is the ability to focus one’s perception on target visual or auditory stimuli and filter out unwanted distractions.
Executive function is the ability to plan one’s actions strategically, to think abstractly, and to exercise cognitive flexibility – the ability to change strategy as needed. A classic test of executive function is trail-making, drawing a line from 1-A-2-B, etc., which requires quickly switching from numbers to letters and back again.
Reaction time and processing speed are related functions that deal with how quickly someone can react to stimuli and process information, respectively.
Another very important variable in brain-training studies is generalizability – to what extent does training in one specific task increase performance on other tasks, and how far from the trained task does the effect extend? For example, does training in a visual memory task improve verbal memory, and does any memory training improve executive function?
Intervention types generally break down into three categories – classic training tasks, neuropsychological training (which involves multiple tasks at once), and video games.
Finally, studies need to account for the duration of any training effect. If there is an effect, how long does it last after the period of training ends?
The above variables must be considered in addition to all the generic factors that influence the rigor of any clinical study – number of subjects, randomization, effect size, statistical significance, proper blinding, adequate control group, accounting for multiple comparisons, drop-out rate if any, dose-response (in this case, duration and intensity of training) and replicability.
With all of these variables to account for it will take a great deal of research to understand the true effects of computer-based brain training of each type for various outcomes and on various populations. Not surprisingly, existing research is just scratching the surface of addressing all the potential questions regarding brain training.
A 2012 systematic review by Kueider et al. identified 151 computerized training studies published between 1984 and 2011 involving healthy older adults. Of the 151 studies identified, only 38 met the review’s inclusion criteria. That is not many studies, resulting in only a few for each intervention and target population.
For the full results of this review, I suggest you read the original article, which is available open-access at the link above. It’s not really possible to summarize the full results in less space than the review itself, so there is no reason to duplicate it here. To give an overview, however, in each category there were only a few studies, and most studies were relatively small. My overall impression, therefore, is that much more research needs to be done.
Studies generally found positive effects from brain training (not surprising for small preliminary studies), but in most cases results were mixed with some positive and some negative studies. Brain training was generally found to be as effective as traditional book and pencil training, but less labor intensive.
Effects were strongest for the task that was trained, with highly variable outcomes in terms of generalizability. Overall, tasks generalized either not at all or only to closely related tasks, but not across the board or to very different tasks. For example, there seemed to be no cross-over effect between visuospatial cognitive function and verbal cognitive function.
In this review, classic training tasks had the biggest effect on working memory, processing speed, and executive function. Neuropsychological tasks produced the most improvement in memory and visuospatial ability. Video games had a positive impact on reaction time and processing speed.
A more recent 2013 review and meta-analysis of studies involving healthy children and adults concluded:
The authors conclude that memory training programs appear to produce short-term, specific training effects that do not generalize. Possible limitations of the review (including age differences in the samples and the variety of different clinical conditions included) are noted. However, current findings cast doubt on both the clinical relevance of working memory training programs and their utility as methods of enhancing cognitive functioning in typically developing children and healthy adults.
A 2013 study of brain training in older adults with mild cognitive impairment or dementia found no statistically significant difference between the treatment and control groups, though there was a tendency toward better performance in the treatment group, and only among the more mildly affected subjects.
Computer based brain-training is a promising intervention for maintaining and improving cognitive function in healthy and perhaps mildly impaired individuals, primarily because it is convenient, less labor intensive than traditional methods, and cost effective.
Existing research, however, is inadequate to rigorously address all of the variables of brain-training interventions. There do appear to be a few patterns in existing research, however.
- Brain-training is effective, whether designed as classic cognitive tasks, combined tasks, or video games
- Effects are mostly restricted to the specific tasks being trained and do not significantly generalize to other tasks or cognitive functions
- Effects tend to be short lived, although evidence here is very mixed
- Computer-based brain training does not appear to be significantly different in outcome from traditional pencil and paper based training, but is less labor intensive.
- I could find no published evidence to support any claims for individualized programs.
In short, brain-training does not seem to make you smarter, but will make you better at whatever task you perform. This can be simply a training effect – you will get better at anything you do repetitively. This is no more an effect of brain plasticity than any generic learning. Suggestions that such brain training makes your brain function better in any way other than simply learning the task that is being practiced are not evidence-based.
Another way to look at all this is that the very concept of “brain-training” is probably flawed. It is useful as a marketing slogan, but does not seem to be based in reality. “Brain-training” is just a fancy term for good old-fashioned learning, but is meant to invoke an image of cutting edge neuroscience and brain plasticity which is not supported by evidence. It’s just learning.
The bottom-line recommendations I would make from the existing data are these:
- Engaging in various types of cognitively demanding tasks is probably a good thing.
- Try to engage in a variety of novel types of tasks. These do not have to be computer-based.
- Find games that you genuinely find fun – don’t make it a chore, and don’t overdo it.
- Don’t spend lots of money on fancy brain-training programs with dramatic claims.
- Don’t believe the hype.
Finally, there is a clear need for further research. We need many large rigorous studies that control for multiple variables.
Doctors and nurse practitioners: We’re failing the reality test
Over the past several months, I have covered some controversial topics, such as electronic health records and the overuse of diagnostic testing. For this month’s column, I will address a less provocative topic: the role of non-physician providers in patient care. (Okay, perhaps we will discuss something non-controversial next month.)
Rather than rehash organized medicine’s position(s) on the topic or attempt an unbiased review of the evidence (what little there is), I will present a practicing physician’s real-life perspective of the issue, and comment on the vitriol that this subject generates. Before I go further, I remind you that my statements do not necessarily reflect official policies of ACP.
I have worked with nurse practitioners or physician assistants since medical school in different settings: resident clinic, a staff-model HMO, and 20 years in private practice. During that time, I have been a colleague, teammate, co-worker, supervisor, and employer of NPs and PAs. For simplicity, I will refer to both types of clinicians as non-physician providers, or NPPs (“mid-level providers” or “physician extenders” are terms that many NPs and PAs find objectionable, by the way).
My practice uses NPPs to increase our patients’ access to care. Our patients can see NPPs for urgent visits, follow up of chronic conditions such as diabetes and hypertension, and preventive services. Our NPPs do not have their own patient panels because we prefer that every patient in the practice have a primary physician. Our preference is based more on logistics than our judgment of the NPPs’ ability to manage a panel of selected patients. However, some of our patients take matters into their own hands and find a way to see the NPP for all of their problems. I don’t view that as a threat but see it as an affirmation that we have a team of providers that patients feel comfortable seeing. Some patients, on the other hand, refuse to see anyone but a physician. That is their choice. When they request an appointment, we make clear who they can see and what their credentials are.
Our NPPs see patients independently. When they have a question, they ask one of the physicians. In a typical day, that might happen once or twice, usually because the patient is complicated or has an unclear presentation. Often, the NPP will recommend that such patients follow up with one of the physicians. That isn’t surprising given the differences in training and expertise. On the other hand, sometimes one physician will ask another for help with an exam finding or a management question. One of my NPPs worked in a dermatology office for many years, and sometimes I will ask her to look at a rash that I can’t figure out. When we are not sure of something, we ask for help, regardless of our title.
Physicians review and cosign every office note from an NPP visit. There are a few reasons for that, including billing requirements, but it also helps us to keep up to speed with what is happening with our patients. That stated, there are very few occasions that I read an NPP’s note and disagree with the care provided, and most of those disagreements are more over style than substance. I suspect that if I reviewed my physician colleagues’ notes I would have similar disagreements from time to time.
Do our NPPs order more tests or prescribe more antibiotics than the physicians do? Sometimes it seems that way, but then again the NPPs are often seeing acutely ill patients. It varies by NPP, just as physicians differ in their test and antibiotic use. I believe that NPPs welcome education on appropriate use of tests and treatments more than physicians do. I should add that I have hired new physicians straight out of residency who order more tests and antibiotics per capita than any NPP.
On average, our NPPs see fewer patients per day than do our physicians, but in a crunch, the NPPs can see just as many, if not more. The longer visits with the NPPs are by design, for reasons such as patient education and chronic care management. We are a fee-for-service practice, so provider productivity matters, but at the same time, with the longer NPP visits we can provide better care for our patients without hurting the bottom line too much.
From my vantage point, many of the arguments over how to limit what NPPs do fail the reality test. We hear a lot about supervision. One could argue that most of my patients’ visits with NPPs take place without my supervision. While you can call my reviewing the notes “supervising,” by the time I read the note, the prescriptions are written, the tests ordered, and the patient sent home. When my NPPs need help with a patient, they seek help, just as a physician should under similar circumstances. That has nothing to do with regulations or employment status; it is a professional obligation.
Then there is the talk about interchangeability of physicians and NPPs. NPPs can provide many of the primary care and acute care services that I do. That does not make us equivalent, just as my being able to provide much of the care to patients with heart disease does not make me a cardiologist. We work well together when we understand our roles, abilities, and limitations, and we value what each of us brings to the care of our patients.
As to the economic arguments about threats to physician practice, my home state is one of the most permissive for independent nurse practitioner practice, yet there are very few such practices in the state. Perhaps that speaks to the choices that NPPs make, or the fact that a business model that doesn’t work well for physicians wouldn’t work any better for NPPs.
So, when I sit in meetings and listen to angry and frightened physicians or defiant NP leaders discuss “scope of practice,” “restraint of trade,” and who can do what better than the other, I think about what goes on in the real world and wonder if we’re all on the same planet. Why don’t we focus on communication, collaboration, education, and professionalism instead?
Yul Ejnes practices internal medicine in Cranston, Rhode Island, and is the Immediate Past Chair, Board of Regents, American College of Physicians. His statements do not necessarily reflect official policies of ACP.