* The Wall Street Journal
* NOVEMBER 27, 2009
Insurer Aims to Alter Health-Care Fee Model
By BARBARA MARTINEZ
Blue Cross Blue Shield of Massachusetts Inc. is expected to announce Friday a deal covering 60,000 members of the Caritas Christi Health Care system, marking one of the country's largest experiments in fundamentally changing the way doctors and hospitals are paid.
In most of the U.S. health-care system, doctors and hospitals generally earn money when people get sick, under a reimbursement system known as "fee for service." But Blue Cross is trying to change the payment model to a system in which doctors and hospitals earn more by keeping patients healthy and out of doctors' offices and hospitals.
If successful, the approach offers a potential model for the rest of the U.S. Legislation to overhaul the health-care system pending in the Senate calls for Medicare to set up small experiments to change reimbursement in ways similar to what Blue Cross is attempting....
Full article at: http://online.wsj.com/article/SB125928023296565707.html
I am not a fan of the present payment scheme in American medicine...far from it. However, the idea that what amounts to a capitated network controlled by hospitals would end up delivering better service and care to patients has serious flaws. In order to understand the almost certain breakdowns which will occur under this model, you need to first think about who is contracted to whom.
Ideally, contractual arrangements are constructed between two parties and some sort of exchange happens between them. Each party controls its own resources and has the ability to continue the relationship or to end the contract. The decision to invest one's own resources and continue the relationship is based upon one's own criteria, which may or may not bear much resemblance to criteria established by someone else. You can decide what is important to you and allocate your own resources accordingly.
In the case of the proposed capitated health care network, patients relinquish or are granted financial resources through some mechanism (wages withheld, taxes, government largesse), and this money goes directly to some third party, generally some variant of an insurance company. Thus from the start, those who are supposedly the final recipients of any health care services control none of the resources. Patients ultimately must be dependent upon the kindness of strangers unless the incentives of those who hold the money are aligned with the needs and wants of patients. Fat chance that will happen consistently.
Next, the real contractual arrangements are negotiated between the insurance company and some sort of health care delivery agent. In the Massachusetts plan outlined in the WSJ story, the insurance company develops a prepaid agreement with hospitals based upon the assumption that, for a set amount of money, hospitals will deliver complete care to a set of patients. The idea is that hospitals can function as some sort of accountable entity. The question becomes: accountable to whom?
Ultimately, hospitals (or integrated health care systems) in this scenario are accountable primarily to the legal entities with whom they have entered into contractual obligations, namely the insurance companies who hold the money. Any actual obligation to patients, who retain only a limited ability to control resources, must be secondary.
One of the basic tenets of moving to a capitated model in an integrated system is that physicians will no longer be paid for doing more things to patients, thus ending a perverse incentive structure which rewarded some physicians for overutilizing lucrative activities. However, the new system will replace one set of perverse incentives with a second, perhaps worse, set. Physicians (and presumably non-MD extenders of all types) will be employees of the specific entities which have primary contractual obligations not to patients but to insurers.
Will (or should) all encounters between patients and providers be preceded by the equivalent of the reading of a "Health Care Miranda" statement, which might read like:
" I may appear to be your personal physician (physician extender, nurse practioner..) and have your best interests in mind. However, I am an employee of the XXXXXXX Health Care system with whom I have a contractual obligation. You may have specific wants and desires relating to your health care and our priorities at XXXXXXX Health Care system hopefully overlap to some degree with your priorities. My entire compensation and benefits are paid by XXXXXXX Health Care system. My year end bonus is primarily based upon specifically measurable end-points which may have little to do with your specific health or well being. It is our mission to deliver what we have convinced our contractual partners that you need, not necessarily to deliver what you want. I am incentivized to avoid making you particularly unhappy but there is little financial reason for me to aim to go much beyond this goal since I do not actually work for you".
Wednesday, November 25, 2009
Where are our blind spots?
Hindsight is 20/20. Yes, it may be a cliché, but it is also a truism. Nowhere is this so obvious as when a major scam or hoax is uncovered. When Bernie Madoff's Ponzi scheme was revealed, the warning signs were so obvious in retrospect.
Many things change, but one thing that does not: to thrive (both individually and together), we must be trusting to some degree. In being so, we are prone to being scammed, and when we or someone else is the victim of a scam or hoax, we scratch our heads and ask, how could we (or anyone) be so gullible?
Being a person of science, I am reasonably well steeped in the concepts of skepticism and testable hypotheses. One of the great things about the scientific method is the emphasis on testing things in an environment where one has a good handle on most of the key variables. The problem is that this context has only limited applicability to the real world. Most of the circumstances in real life which require some sort of decision are one-shot deals where we do not even begin to know all the variables. We experience an outcome without being able to make any reasonable comparison to alternative outcomes. We can decide whether we are happy with our respective outcomes and nothing more.
Under only the rarest of circumstances do people live without the need for interactions with other people. We require "things" as well as emotional support from our fellow humans, much of which we obtain through formal and informal exchange. Ideally, the exchange which occurs results in each party benefiting. There are circumstances where it is obvious that one of the parties has been deceitful and the exchange is asymmetric, and this becomes obvious in a time frame where those affected understand the outcome.
In contrast, there are many instances throughout history where entire populations have been "scammed" for generations. Perhaps the most obvious examples have been ruling elites who employed sages, priests, wizards, and related classes of experts who provided purported glimpses into the future. Many models were used: the ancient Chinese read cracked animal bones, and the Greeks and Romans consulted the Oracle of Delphi. Perhaps it is too strong to use the term scam, since this implies some sort of intentional deceit. Still, experts sold themselves as having skills beyond what they could actually deliver. No one caught on for hundreds of years because there was no way to demonstrate that their predictions were actually wrong.
There is a constancy throughout history: widely held beliefs are found to be simply wrong at a later time, when tools or circumstances allow for actual testing of hypotheses. It is very unlikely that this has changed or will change in the near future. The question arises: what widely held beliefs do we now have that will prove to be completely wrong?
Sunday, November 22, 2009
Understanding what is knowable
A piece appeared last week in the WSJ on the rating of wines and how a particular winery proprietor and professor conceived and executed a blinded trial to assess whether wine tasting provides any sort of consistent and reproducible results. The piece can be found at:
http://online.wsj.com/article/SB10001424052748703683804574533840282653628.html
A relevant excerpt which summarizes the key findings is:
"The unlikely revolutionary is a soft-spoken fellow named Robert Hodgson, a retired professor who taught statistics at Humboldt State University. Since 1976, Mr. Hodgson has also been the proprietor of Fieldbrook Winery, a small operation that puts out about 10 wines each year, selling 1,500 cases
A few years ago, Mr. Hodgson began wondering how wines, such as his own, can win a gold medal at one competition, and "end up in the pooper" at others. He decided to take a course in wine judging, and met G.M "Pooch" Pucilowski, chief judge at the California State Fair wine competition, North America's oldest and most prestigious. Mr. Hodgson joined the Wine Competition's advisory board, and eventually "begged" to run a controlled scientific study of the tastings, conducted in the same manner as the real-world tastings. The board agreed, but expected the results to be kept confidential.....
In his first study, each year, for four years, Mr. Hodgson served actual panels of California State Fair Wine Competition judges—some 70 judges each year—about 100 wines over a two-day period. He employed the same blind tasting process as the actual competition. In Mr. Hodgson's study, however, every wine was presented to each judge three different times, each time drawn from the same bottle.
The results astonished Mr. Hodgson. The judges' wine ratings typically varied by ±4 points on a standard ratings scale running from 80 to 100. A wine rated 91 on one tasting would often be rated an 87 or 95 on the next. Some of the judges did much worse, and only about one in 10 regularly rated the same wine within a range of ±2 points.
Mr. Hodgson also found that the judges whose ratings were most consistent in any given year landed in the middle of the pack in other years, suggesting that their consistent performance that year had simply been due to chance."
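To make the consistency measure concrete, here is a minimal Python sketch of the kind of calculation the study design implies: each judge scores the same wine three times, and we look at the spread of those repeated scores against the article's ±2-point benchmark. The scores below are invented for illustration; the article does not publish the raw data.

from statistics import mean

# Hypothetical triplicate scores (80-100 scale): each judge rated the same wine
# three times, each pour drawn from the same bottle. Numbers are invented.
triplicate_scores = {
    "judge_01": [91, 87, 95],
    "judge_02": [88, 89, 90],
    "judge_03": [84, 92, 96],
}

for judge, scores in triplicate_scores.items():
    spread = max(scores) - min(scores)   # total range of the three ratings
    plus_minus = spread / 2              # expressed as "±" around the midpoint
    consistent = plus_minus <= 2         # the ±2-point benchmark from the article
    print(f"{judge}: mean={mean(scores):.1f}, ±{plus_minus:.1f}, within ±2: {consistent}")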
There is an important lesson to be learned from this study. Perhaps the most important knowledge anyone can have is the knowledge required to recognize that you don't know. This reminded me of an experience from my internship in the late 1970s, when I spent a year as a general medical intern. I worked in a general hospital caring for patients with acute bread-and-butter medical problems. We had a substantial number of patients with acute strokes, and it was the beginning of the era of sophisticated imaging tools. However, our specific hospital did not have a CAT scanner. We were able to get scans, in a not-so-timely fashion, from a sister hospital in the area.
For the first half of the year, the medical service would admit the patient and get an immediate neurology consultation. The neurology resident would come with their big black bag and use the time-honored tools of the bedside neurology exam to localize the lesion. They would then write a detailed note confidently describing where the stroke lesion resided in the brain. They did so based upon years of experience using these tools, and there was great confidence in their utility. After their assessment, the neurology team would recommend the patient receive a CAT scan of the brain when the test could be done.
Unfortunately, the time-honored bedside tools had never actually been validated, because the tools to validate them did not exist until the new imaging technology was developed and deployed in the late 1970s. During the first six months of my internship, the sequence was always neurology evaluation, detailed report, and then CAT scan. The results were remarkable. The CAT scan showed the bedside neurological evaluation was basically always wrong when it came to identifying the actual site of the stroke. For the second half of my internship, the sequence was always neurology evaluation, CAT scan, and then detailed report. When confronted with unambiguous evidence that the time-honored tools were terribly flawed, these tools were quickly jettisoned.
There was no strong financial stake held in the bedside neurological assessment. In fact, it is not at all surprising that a tedious, time consuming, and poorly compensated activity such as this would become history when a better tool came along. However, strongly held but weakly supported beliefs are not always let go so readily, particularly when they serve as the underpinnings for financially lucrative activities.
Under those circumstances it is devilishly difficult to get the parties who have something at risk to objectively ask fundamental questions. How do I know what I believe to be true is actually true? What knowledge is really knowable, and how do I actually know this to be true? While this may all sound like the stuff of late-night bull sessions in a college dormitory, it really is central to any professional activity where clients come to you as a trusted person of authority. Strongly held beliefs supported by nothing more than strongly held beliefs tend to serve only as rationalizations of self-serving activities.
Wednesday, November 18, 2009
Evidence-based medicine vs. politically based medicine
The latest blow-up regarding the Preventive Services Task Force recommendations on mammography should dispel any doubts about our ability to create a firewall around "evidence-based medicine" to prevent decisions from being politicized. Politics always trumps reason, and many of the assumptions underlying health care reform are undermined by this observation.
It is commonly accepted that health care is a basic human entitlement which should be guaranteed by legal protection. In order for this right to attain such a status, we need to be in the position to define what the scope of the right should be. Presumably, this definition should be based upon what is rational and beneficial to patients. How hard can that be?
The mammography screening controversy underscores how difficult this process is. Few areas have been studied as thoroughly, or for as long, as the effectiveness of mammography for preventing breast cancer death. Much of the criticism of the Task Force's recommendations has focused on the perception that they were based solely upon financial considerations, implying that the only reason not to screen is that it is just too expensive to screen the younger age groups. This is not an accurate assessment of their work. The full report can be found at http://www.ahrq.gov/clinic/3rduspstf/Breastcancer/bcscrnsum1.htm.
There is much more to the story than money. First, the underlying data does not inspire great confidence in the ability of mammography to save lives. The number of subjects to screen over 10 years to save one life in the 50-60 age group is estimated to be around 1,300, with confidence intervals ranging from about 300 to over 7,000! The number to screen in the 40-50 age group was ~1,900, with an only slightly smaller confidence interval (900 to 6,000). I have to wonder if this data is EVER presented to patients when screening is placed in front of them as an option. What this essentially means is that any given primary care physician could, over an entire practice lifetime, order literally thousands of screening mammograms and save no lives whatsoever.
When you start to compare numbers from various studies, they just do not add up. Consider, for example, a study by Rowan T. Chlebowski, M.D., Ph.D., of the Harbor–UCLA WHI Clinical Center, which in 2008 published findings from the WHI Estrogen plus Progestin (E+P) Hormone Trial. Of the 16,608 women enrolled in the E+P Trial, 8,506 were randomly assigned to take active study pills with combined estrogen plus progestin, while 8,102 took inactive placebo pills. Each woman had a mammogram and breast examination yearly. Biopsies were performed based on their physicians' clinical judgment.
During the 5.6 years of the trial, 199 women in the active hormone group and 150 women in the placebo group developed breast cancer. Assuming the rate of cancer formation is relatively constant, one can extrapolate that about 300-400 women per arm would develop cancer over a 10-year window. Given the Preventive Services Task Force meta-analysis, where you need to screen about 1,500 women on average for 10+ years to save one life, screening would have saved around 4-6 women in each of these roughly 8,000-woman cohorts. What about the other 150-200 women? Does that mean the lives of all the remaining women who were diagnosed and treated for breast cancer were not saved?
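As a sanity check on the arithmetic above, here is a minimal Python sketch using only the approximate figures quoted in this post (trial incidence over 5.6 years, a number needed to screen of roughly 1,500); none of these numbers come from re-analysis of the underlying studies.

# Back-of-envelope extrapolation using the approximate figures quoted above.
cohort_size = 8000             # women per trial arm, roughly
cancers_observed = 199         # active hormone arm over 5.6 years of follow-up
followup_years = 5.6
window_years = 10

# Assume a roughly constant incidence rate to project to a 10-year window.
projected_cancers = cancers_observed * (window_years / followup_years)

# Task force meta-analysis: roughly 1,300-1,900 screened for 10+ years per life saved.
number_needed_to_screen = 1500
lives_saved = cohort_size / number_needed_to_screen

print(f"Projected cancers over {window_years} years: ~{projected_cancers:.0f}")  # ~355
print(f"Lives saved by screening this cohort:    ~{lives_saved:.1f}")            # ~5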
At least part of the explanation may be the explosion in the diagnosis of ductal carcinoma in situ (DCIS), which is clearly a consequence of increased screening intensity. As noted in the USPSTF report:
"some view diagnosis and treatment of ductal carcinoma in situ (DCIS) as potential adverse consequences of mammography. There is incomplete evidence regarding the natural history of DCIS, the need for treatment, and treatment efficacy, and some women may receive treatment of DCIS that poses little threat to their health. In a 1992 study, 44 percent of women with DCIS were treated with mastectomy and 23 percent to 30 percent were treated with lumpectomy or radiation. In one survey, only 6 percent of women were aware that mammography might detect nonprogressive breast cancer."
Using common funds to pay for screening activities hides the fact that each of us is assuming part of the cost. It is a common refrain to say that you cannot put a cost on a life saved. There are at least two flaws with this reasoning. First, the evidence of lives saved is marginal in the younger age groups. Second, there is little evidence that patients make these decisions in an environment where the actual numbers are conveyed to them in an understandable fashion. Part of the problem may be that most physicians are not aware of the numbers. Even if the direct monetary cost to the participant approaches zero, is it a good deal? If you have to put up with more than 500 false positives to find one cancer, and 95% of those cancers do not seem to behave malignantly anyway, is that an activity most patients, if well informed, would buy into?
The question is whether the present data on the effectiveness of mammography, when presented to an informed consumer, would convince individual women to spend their own money. We can estimate the actual cost per person screened based upon a rough assumption of $100 per test and a biopsy rate of one woman biopsied per ten screened. This means on average every woman will need to pay for 10 screens and one biopsy per decade. Whether that biopsy would be a needle or open biopsy or something even more involved such as a stereotactic biopsy is an open question. Let's just assume a conservative $2,000 cost. Add in another $1,000 for travel time, lost wages, and lost productivity, for a total cost of about $4,000. How many women would pay $4,000 for the equivalent of a lottery ticket that reduces their risk of death over the course of one decade by about 1 in 1,000?
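The same estimate in a few lines of Python; every figure is one of the rough assumptions stated above, not a measured cost.

# Back-of-envelope cost per woman per decade, using the assumptions stated above.
screen_cost = 100           # assumed cost of one screening mammogram
screens_per_decade = 10     # one screen per year
biopsies_per_screen = 0.1   # assumed: one biopsy per ten screens
biopsy_cost = 2000          # assumed blended cost of a biopsy
indirect_costs = 1000       # travel time, lost wages, and productivity over the decade

expected_biopsies = biopsies_per_screen * screens_per_decade   # about one per decade
total_cost = (screen_cost * screens_per_decade
              + biopsy_cost * expected_biopsies
              + indirect_costs)

risk_reduction = 1 / 1000   # approximate absolute mortality reduction over a decade
print(f"Estimated cost per woman per decade: ${total_cost:,.0f}")     # ~$4,000
print(f"Approximate absolute risk reduction: {risk_reduction:.2%}")   # 0.10%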
Welcome to the tragedy of the commons. All this discussion is irrelevant if services such as mammography are paid for out of individual resources. The total cost of screening per woman is likely on the order of a few thousand dollars over the course of a decade, which works out to roughly a dollar a day. That is hardly the stuff insurance should pay for, being neither unpredictable nor extremely costly.
This controversy is just one small piece of health care. Once you move to a system where resources are placed in a common pool and allocated via some sort of consensus-driven process, you have a mess. This latest controversy clearly shows that any attempt to base such decisions on scientific consensus is doomed. It involves people, and the decision will be a political one. End of story. Now all we need to do is extrapolate this out to the entirety of health care allocation decisions. If we cannot unambiguously decide whether mammograms are warranted as a health care right after all this investment of time and money, how are we going to deal with defining the scope of the remaining 99.99999% of health care?
Tuesday, November 17, 2009
Designing complexity - Are we deluding ourselves?
Perhaps more than 10 years ago I heard Don Coffey speak. I do not remember the specific topic, but he introduced the talk with an aerial picture of the island of Manhattan. He pointed out that there were somewhere in the neighborhood of five million people on this island and roughly three days of food. He posed the question: "How do all these people remain fed?"
His point was that there was no master feeding plan, no food czar, no ultimate authority. However, there was an abundance and a variety of foods which rivaled any place on earth. How could that be? Something as important as food, which is essential to the lives of all those millions of people, could not be left to chance. Who could have designed such a system?
The system was not designed; it evolved over time... a long, long time. The rules were basically simple: if I have or create something I own, I can trade it for something someone else is willing to give up voluntarily. Voluntary exchange, occurring in an environment respectful of the rule of law and with the right rules in place, is an amazing facilitator of spontaneous order and complexity. Ultimately that complexity is manifested in the amazing density, variety, and abundance which is now the island of Manhattan.
The present state was not intentionally designed or engineered by men. Devoutly religious people are ridiculed for believing in deity-based intelligent design. Some devoutly secular people worship blindly at the same altar, embracing an equally implausible notion that mortal men can achieve god-like powers of intelligent design over complex systems. It is what Friedrich von Hayek termed the fatal conceit.
Complex and durable systems are systems that can respond to change. It is very difficult to design the ability to respond to change into complex systems. Complex and durable systems come as a consequence of iterative processes. These systems can adapt if they can place lots of little bets and can take many small losses in order to find innovation and adaptation to an always changing world.
In the present health care environment, we are tied to systems that are cumbersome and almost impossible to change. In every domain imaginable we are constrained, whether financially or via regulatory shackles. Our financing models could not be more flawed. When I think of our almost complete dependence on federal funding for our research and teaching missions, I cannot help but think of koalas and their eucalyptus-only diet. Cute and quaint, but not a particularly viable strategy for thriving. The clinical domain is not far behind in moving to a eucalyptus-leaf-only diet.
The regulatory chaos is beyond crazy. We have licensing bodies, non-state regulatory bodies, regulations relating to state payers, agencies which regulate insurance mandates, and public/private partnerships to set prices of services, and the general direction of these activities is toward adding layer upon layer of rules and regulations. Each new program is conceived in broad terms in documents which rival War and Peace in length, yet these serve only as a framework for the actual regulations which are subsequently written. The regulations are written by agents who cannot help but be insulated from the real-world unintended consequences of their ramblings. It is the unintended consequences which will have much more lasting effects than anything initially planned.
It all comes back to how we conceptualize the formation of complex networks of human interaction. How do all those people on Manhattan get their food? Intelligent design by humans invariably results in not so intelligent constructs. A bill in Congress to create a universal food care program with a public option for the island of Manhattan would lead to out of control costs, reduced choices, and many hungry people.
Thursday, November 12, 2009
They just can't make this stuff up!
A colleague of mine is diligently reading the entirety of both the House and Senate health care reform bills. He sent me the following amendment to the Senate bill:
Value-Based Modifier for Physician Payment Formula: The Secretary of Health and Human Services would be required to apply a separate, budget-neutral payment modifier to the fee-for-service physician payment formula. This separate modifier will not be used to replace any portion of the Geographic Adjustment Factor. The separate payment modifier will, in a budget-neutral manner, pay physicians or groups of physicians differentially based upon the relative quality of care they achieve for Medicare beneficiaries relative to cost. Costs shall be based upon a composite of appropriate measures of cost that take into account justifiable differences in input practice costs, as well as the demographic characteristics and baseline health status of the Medicare beneficiaries served by physicians or groups of physicians. Quality shall be based upon a composite of appropriate, risk-based measures of quality that reflect the health outcomes and health status of Medicare beneficiaries served by physicians or groups of physicians. In establishing appropriate quality measures the Secretary would be required to seek the endorsement of the entity with a contract with the Secretary under section 1890(a) of the Social Security Act. The Secretary would also be required to take into account the special conditions of providers in rural and other underserved communities.
By 2017, all physician payments must be subject to this payment modifier.
I don't even know where to begin. To believe that this could be implemented in any time frame and result in a positive experience for anyone involved with Medicare (and likely all other insurance the Feds touch) should itself count as a failure of reality testing.
What will be measured as a surrogate for quality?
Who does the measuring?
How will any of these measures be assessed for actual validity?
Who makes decisions as to weighting?
Given this is by definition a zero sum game (budget neutral fashion), who decides the winners and losers?
There are over 1 billion encounters with doctors in the US each year. How many hours will it take to develop appropriate quality metrics applicable to even a fraction of these encounters? And by 2017 all payments must be subject to this payment modifier?
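To see what "budget neutral" forces in practice, here is a toy Python sketch of redistributing a fixed payment pool according to a quality-relative-to-cost score. Everything in it is hypothetical: the groups, scores, and weighting scheme are invented and appear nowhere in the bill; the only point is that under budget neutrality, any bonus paid to one group must come out of another group's payments.

# Toy illustration of a budget-neutral quality/cost payment modifier.
# All groups, scores, and weights are hypothetical; the bill specifies none of this.
groups = {
    # group: (baseline payment, composite quality score, composite cost score)
    "group_A": (1_000_000, 0.90, 1.10),
    "group_B": (1_000_000, 0.80, 1.00),
    "group_C": (1_000_000, 0.70, 0.90),
}

# A naive "value" score: quality achieved relative to cost.
value = {g: quality / cost for g, (_, quality, cost) in groups.items()}

# Budget neutrality: the total payout cannot change, so payments are
# reweighted by each group's share of total value.
total_pay = sum(pay for pay, _, _ in groups.values())
total_value = sum(value.values())

for g, (pay, _, _) in groups.items():
    adjusted = total_pay * value[g] / total_value
    print(f"{g}: baseline ${pay:,.0f} -> adjusted ${adjusted:,.0f} ({adjusted - pay:+,.0f})")

# The adjustments sum to zero: every winner is funded by a loser.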
Sunday, November 8, 2009
Standardization, modularity, and the changing world
Much has been made of the merits of standardizing processes within health care environments as a vehicle to improve outcomes and safety. The real power of standardization comes when it can be deployed with modular design. Combining these two characteristics allows for safety, efficiency, and the ability to adapt to change.
Modular design allows engineers to tinker with one component without altering the function of a different component. While this is well appreciated within the engineering and software domains, it is not well appreciated within complex human systems. However, when thinking about it within the context of software design, it is easy to see how complex human systems share many of the same characteristics.
When building new software, it is very common to build upon the foundation of old code. Likewise, human institutions are rarely created de novo. They are generally created using older structures and forms and are frequently created using social groups derived from pre-existing structures. When creating new software, the operating units or files may operate in a self contained fashion, may require the function of other programs, or may be required by other programs for their functions. These represent interdependencies.
Modifying operating files which are dependent upon or required by other files creates added complexities. Their interdependencies must be defined if possible and unintended consequences identified. These complications of changing parts of software have uncanny parallels to manipulations of complex human systems. The more modular the software, the fewer interdependencies that exist, and the easier it is to manipulate and change any given component.
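As a small, entirely hypothetical illustration of the software side of this analogy, the Python sketch below shows a billing function that depends only on a narrow scheduling interface. Because the interdependency is reduced to an explicit, minimal contract, the scheduling component can be modified or replaced without altering the billing component.

from typing import Protocol

class ScheduleSource(Protocol):
    """The narrow, explicit contract that billing depends on."""
    def visits_for(self, patient_id: str) -> int: ...

class SimpleScheduler:
    """One concrete implementation; it can be swapped without touching billing."""
    def __init__(self, visits: dict):
        self._visits = visits

    def visits_for(self, patient_id: str) -> int:
        return self._visits.get(patient_id, 0)

def monthly_bill(schedule: ScheduleSource, patient_id: str, fee_per_visit: float) -> float:
    # Billing knows only the ScheduleSource interface, not how scheduling is
    # implemented, so the two modules remain independently changeable.
    return schedule.visits_for(patient_id) * fee_per_visit

# Swapping SimpleScheduler for any other ScheduleSource leaves this call unchanged.
print(monthly_bill(SimpleScheduler({"p1": 3}), "p1", 80.0))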
Within complex human systems there are a host of interdependencies. Whenever a "change order" is issued, it is best to understand just how modular your system is. Before you can begin to understand what might happen as a consequence of such a change order, you need to at least understand the nature of the interdependencies which are likely to come into play. In the health care environment we are only beginning to appreciate what we are up against.
Our present architecture is neither standardized nor modular. Our interdependencies are extensive and only minimally defined. Perhaps our greatest interdependencies are financial. Within large integrated health care entities, the function of many financially non-viable units depends upon financial resources generated by other units. Since interdependencies create non-modularity, it only follows that financial interdependency creates inflexibility. You can't alter one piece without altering the function of other units. This does not bode well for entities engineered this way, since the only things we can predictably anticipate are a changing world and survival of the most adaptable entities.
Saturday, November 7, 2009
Plans and innovation
I am a product of the western world. There are a number of assumptions which go with growing up in such an environment. One of those assumptions is that of progress. We are raised to believe there is some sort of directionality in human development, moving toward some end which is more desirable than the present. Recent history by and large reinforces such a belief system, although there are many people who contest the assumption that what we have experienced represents an improvement over how people lived in the past. I for one think they are crazy and would not for a minute want to roll back time to a point where most children died in childhood and people eked out a day-to-day existence.
Given that our circumstances as people have improved immensely over the past 300 years (see Steven Landsburg - http://online.wsj.com/article/SB118134633403829656.html#articleTabs%3Darticle), a fundamental question arises: what portion of that improvement was the consequence of specific and intentional human plans, and what part was due to unintended consequences of activities undertaken for completely unrelated reasons? For example, did the industrial revolution develop because of a strategic plan put forth by English merchants? Did the German pharmaceutical industry develop because of some master plan devised by the German chemical industry? Obviously the answer to these questions is no. Whether it be technological, legal, or social innovations which made quantum leaps possible, the really big ones happened more because of the random juxtaposition of events than because of anything planned.
There is no question that certain outcomes clearly benefit from having a well-defined plan and defined end points. However, if the game-changing breakthroughs are almost always unplanned and linked more to serendipity than to planning, what are the ideal rules that foster both prudent planning and flexibility sufficient to permit disruptive innovation?
I think the key factor is the nature of the challenge one is approaching. There are really three types of problems which we can address. There are simple problems or tasks where the outcomes are clearly definable and the resources and expertise needed to solve them are readily and widely available. An example of this might be the building of a house. It might be expensive and take many months but building a house is a task which has been done literally millions of times.
There are complex and difficult tasks which require coordination of many people and resources over an extended length of time. Some of the challenges may not be fully defined at the time the task is taken on. However, the full scope of the problem can ultimately be defined, and the resources needed to address it can be identified or created. An example of this was sending a man to the moon. There were a number of initially undefined elements, but in the end it was a complex yet definable and solvable problem, based upon Newtonian physics, 20th century materials science, and, for the most part, definable variables.
Finally, there are wicked problems (http://cognexus.org/id42.htm). These are problems for which we cannot even come close to defining all the variables and where the only certainty is the presence of unknown unknowns. Approaching one element of a wicked problem will virtually always have unintended consequences.
Improving the human condition in the long run is a wicked problem. Any intervention is likely to have both planned desirable outcomes as well as unintended undesirable ones. How do we continue to act and not be paralyzed with the fear that our actions will bring disastrous and unintended consequences?
I believe the key to success (as measured by continued innovation and progress) is to continue to plan on a relatively small and local scale and to hedge our bets. Provide incentives for people and groups to plan for and gain from small incremental improvements. History would suggest that small wins are like lottery tickets: acquire enough of them and you will get a game changer. Plan to do too much, and attempt to control too much over a time frame beyond which you cannot reliably predict outcomes, and you will generally get only unintended consequences.
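The lottery-ticket point can be made concrete with a toy simulation (my illustration only, with invented probabilities): spreading effort across many small, independent bets is far more likely to produce at least one game changer than staking everything on a single large plan.

import random

random.seed(0)
TRIALS = 100_000
P_SMALL_WIN = 0.02   # assumed chance that any one small bet becomes a game changer
N_SMALL_BETS = 100   # a fixed budget split into 100 small bets
P_BIG_PLAN = 0.25    # assumed (generous) chance that one large planned effort succeeds

def at_least_one_win(n_bets: int, p: float) -> bool:
    return any(random.random() < p for _ in range(n_bets))

small = sum(at_least_one_win(N_SMALL_BETS, P_SMALL_WIN) for _ in range(TRIALS)) / TRIALS
big = sum(at_least_one_win(1, P_BIG_PLAN) for _ in range(TRIALS)) / TRIALS

print(f"P(at least one game changer, many small bets): {small:.2f}")  # roughly 0.87
print(f"P(success, one big plan):                      {big:.2f}")    # roughly 0.25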
Takeover of the amateurs
In a world in which people's activity is becoming ever more hyper-specialized, our ideas of what we do and how we support ourselves change rapidly. This trend has tremendous implications within medicine, and there is no reason to believe it will impact medicine any more or less than other realms. Ultimately, in an ideal world, what any given person does and is rewarded for should have some value to the recipient of that good or service. How this translates to medicine is that patients should be as well off or better off after their encounters with us than before.
The question arises: "What can I (or anyone) do which is of value to other people and why?" Presumably, the other people I am referring to are those to whom I am directing my services; in the realm of medicine, that is presumably patients. What is the nature of those value-adding activities, and what specific expertise or talents do I have which allow me to provide these services better or exclusively?
As I see it, physicians historically have held central positions in health care because they had access to unique information which allowed them to make predictions and solve problems, they controlled access to specific diagnostic and therapeutic tools, and they were in a unique position to coordinate human activity to facilitate the care of patients. Functioning in these roles required substantial training on a background of particular innate talents. The net result of these requirements was that medical management talent was a scarce resource.
This combination of characteristics is not unique in history and is a narrative which describes the existence and evolution of virtually every professional class whether it be priests, scribes, journalists, or professional photographers. In each of these cases, barriers to entry and need for unique tools or expertise limited access to the profession and created a scarcity. However, technological change created rapid displacements. Ultimately, technological change allowed for massive entry of "amateurs" into these respective fields and undermined the role of the professionals.
Much of this historical information is well described in Clay Shirky's book "Here Comes Everybody," but some specific observations are worth repeating. He describes the role of the scribe in the middle ages, when the ability to read and write were rare skills and the scribe was an essential and scarce resource, a key cog in the ability to pass knowledge from generation to generation. The intellectual material that would have been lost between generations without the scribe was immense. However, the introduction of the printing press changed all these assumptions and opened the business of replicating the printed word to a much larger number of non-scribe amateurs. Much was actually written lamenting the loss of the scribe profession, but interestingly these writings were disseminated using printing press technology.
In our contemporary world, similar trends are happening in the world of journalism. Until recently, entry into the world of publishing was limited by the ability to print and disseminate writings. That is no longer the case. A word processor and an internet connection conceivably make everyone a one-man publishing company, and this capability is moving toward the loss of the journalist as a professional class. The scarcity associated with the previous business model (that is, few publishing outlets and few journalists) has vanished. The amateurs have taken over. There is no clear distinction between the professional class and everyone else.
So, what does all this have to do with medicine and health care delivery? The "amateurs" are coming to health care, facilitated by a variety of technological changes, particularly those impacting the dissemination of information. Over 45 years ago Kenneth Arrow identified what he believed to be a key and unique element that made the health care industry different: information asymmetry between patient and provider. It is remarkable that this concept is still emphasized, as if nothing has changed in the past 45 years.
It is not as if the information asymmetry has evaporated, but the calculus has morphed remarkably. As information relating to health care has exploded, physicians have become less and less dependent upon their own brains and rely more and more on information tools which they can access on demand. However, these tools are generally not proprietary and are accessible to patients and other non-physicians. Thus the justification of a professional class on the basis of access to and control of information is going away.
Specific technical skills may also serve as a justification for the physician professional class. However, best outcomes in this realm are generally linked to practice, process, and narrow focus. This is hardly a justification for the broad, extended, and expensive training model now used to train physicians. We could perhaps get better outcomes by focusing on the specific technical skills required to do very specific, focused tasks.
Finally, the professional class distinction for physicians may be justified on the basis of their ability to synthesize information and coordinate the activities of many people. This skill set is always prized but is not unique to medicine, and entry into this realm may come from many different educational and experience paths. It could be argued that since current medical education has no particular focus on these specific skills, and since the current set of financial incentives within traditional medicine has created a culture which is indifferent to this particular need, these functions should be moved elsewhere.
Where does this leave the medical profession and what is its fate in the future? Perhaps the more important question is how technological and social change will alter our ability to serve the medical needs of our patients. There is little question in my mind that we will cede control to the amateurs in many realms that were traditionally ours. Innovation brings disruption and our profession will be disrupted. In the end our measure should not be how it affects our particular guild but instead how it affects our patients. It will be a hard pill to swallow.
Monday, November 2, 2009
Financial Innovation in Health Care
After reading Clayton Christensen's book "The Innovator's Prescription," I began to think about how our current payment structure has stifled needed innovation in medicine. When we think of innovation, we tend to focus on technological change. However, dramatic and beneficial change in any industry requires the simultaneous implementation of technological and financial pieces. When Sony entered the US television market, it benefited from access to marketing channels via the upstart Kmart, whose business model was not dependent upon service revenues. The lesson of this story is that the technological innovation needed a financial innovation to be viable.
The automobile was simply an expensive toy of the wealthy until Henry Ford created a disruptive manufacturing process which included putting more money in the hands of his workers. The telecommunication revolution was driven by both technology and new business models made possible by the dismantling of AT&T and novel approaches to bundling telecommunication services. The expansion of home ownership was made possible by both advances in building materials and the development of mortgage products which extended credit to persons who previously did not have that option.
How are these observations relevant to health care? I believe that the financial innovation piece is as important as any other innovation in the creation of an improved health care system. The object of any change should be to move us toward expanding what is available and affordable to more and more people. We have naively assumed that whatever technological advancement is developed, at whatever cost, can be deployed by declaring it a human right and insisting it be made available by some sort of redistributive magic. That will not work.
Current plans for reform provide no blueprint for the type of financial innovation which is requisite for moving toward a world where better care is available to more people for less cost. To meet those ends we need to put in place mechanisms which facilitate the development of payment schemes that support real disruptive innovation. Whether everyone is "covered" is ultimately meaningless if coverage is for legacy services offered under legacy conditions by legacy providers. Using legacy payment schemes will guarantee modest variations on the status quo, overspending on overpriced services, predictable shortages of underpriced services, and no real change or innovation.
What does the fee actually cover?
Yes, I am going to bash the payment system yet again. I can't help it. The more I think about this, the more I realize that undesirable outcomes can be directly attributed to how doctors are paid.
When I see a patient, I am paid for the specific encounter, that is, the actual face-to-face time I spend with the patient. However, there is a series of post-visit obligations which I incur as well. Four characteristics of these post-visit obligations are worth noting. First, the actual obligations are poorly defined. Second, these obligations are essentially uncompensated. Third, delegating these obligations, even to those with little or no training, generally has little downside for physicians. Lastly, the extent of these post-visit obligations can be managed most efficiently by selecting a subspecialty whose workflow generates few and well defined post-encounter obligations.
Historically, the practice of most specialties and subspecialties of medicine generated sufficient revenues from the encounter to support the activities which were not directly compensated. However, as margins decreased, physicians responded by focusing more and more on activities that generated direct payments. For specialties which generated few downstream unfunded obligations, the higher throughput created few problems: when you were done with the face-to-face encounter, you were done. For specialties like primary care, each encounter predictably created an additional post-encounter unfunded obligation, and ramping up billable activity created an unsustainable workload of non-compensated activities. One approach was simply to stint on what is not paid for. For the most part this approach had positive financial outcomes, at the cost of practicing medicine in a way that was more in the physician's best interest than the patient's.
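To see why ramping up volume does not solve the problem in primary care, consider a back-of-the-envelope sketch. All of the numbers below are invented for illustration (a 20 minute paid visit and a 12 minute unpaid tail of results review, calls, refills, and paperwork per visit); they are not figures from my practice or from any study.

VISIT_MINUTES = 20      # paid face-to-face time per encounter (assumed)
FOLLOWUP_MINUTES = 12   # unpaid post-visit work per encounter (assumed)

def workday_hours(visits_per_day):
    # Split the workday into the paid portion and the unpaid tail it generates.
    paid = visits_per_day * VISIT_MINUTES / 60
    unpaid = visits_per_day * FOLLOWUP_MINUTES / 60
    return paid, unpaid

for visits in (15, 20, 25, 30):
    paid, unpaid = workday_hours(visits)
    print(f"{visits} visits: {paid:.1f} paid hours + {unpaid:.1f} unpaid hours")

# Prints roughly: 15 visits: 5.0 + 3.0, 20 visits: 6.7 + 4.0,
#                 25 visits: 8.3 + 5.0, 30 visits: 10.0 + 6.0

Under these assumed numbers, every additional billable encounter drags a fixed slice of unpaid work behind it. The only ways to keep the day finite are to stint on the unpaid tail or to choose a specialty that generates almost none of it, which is exactly the behavior described above.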
The current approach to the presence of perverse incentives is to mount a campaign which aims to influence physician practice behavior by appealing to their professionalism. Such an approach, which assumes that physicians can be durably influenced to respond to incentives other than those directed at self interest, may sound appealing. We should feel obligated to do what is right if we were correctly socialized. However, there is little in history to suggest that this is at all functional. It is more likely an exercise in wishful thinking. The product of lecturing medical students on professionalism will quickly wither in the face of real-life economics in a world which financially punishes those who model the desired but unrewarded behavior. Bad incentives trump good intentions in the long run.
Humans are driven by self interest. To deny this is a non-starter as an entry point into any social problem solving activity. In creating a system in which the lack of rewards for specific activities is baked in, we basically guarantee that these activities will go away. We lament that physicians fail to engage in activities for which they receive no compensation, but this should come as no surprise. In order to treat a patient with a given disease, you need a correct diagnosis. To fix a pathological health care system, we also need the correct diagnosis. What is broken? It is the payment system, stupid!