Posted 3/05/12 on Common Sense Family Doctor
On the first day of the clinical preventive medicine course that I teach every spring, I review the concept of lead-time bias and its potential to make a screening test look more effective than it really is. Frugal Family Doctor recently explained how lead-time bias deceptively improves 5-year survival statistics. If you are unfamiliar with this concept, I recommend reading his post, but the basic idea is that by advancing the point in the disease course at which cancer (or some other condition) is detected, screening will always increase the percentage of patients who survive for 5 years or more, even if it does nothing to reduce mortality. This concept is as basic to the appropriate use of screening tests as vital signs are to the practice of medicine. In my opinion, any physicians who don't understand lead-time bias ought to have their test-ordering privileges suspended until they do.
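To make the mechanism concrete, here is a toy simulation of lead-time bias. The numbers (ages at detection and death) are invented for illustration and come from no real study: every patient's tumor surfaces clinically at age 63, a screening test finds it at age 58, and the age at death is unaffected by when the diagnosis is made.

```python
import random

random.seed(42)

# Hypothetical numbers for illustration only: diagnosis at age 63
# (clinical) or 58 (screen-detected); death between ages 64 and 69
# regardless of how the cancer was found.
N = 1000
death_ages = [random.uniform(64, 69) for _ in range(N)]

def pct_surviving_5yr(dx_age):
    """Fraction of patients still alive 5+ years after a diagnosis at dx_age."""
    return sum(d - dx_age >= 5 for d in death_ages) / N

print(f"5-year survival, clinical diagnosis (age 63):  {pct_surviving_5yr(63):.0%}")
print(f"5-year survival, screen detection (age 58):    {pct_surviving_5yr(58):.0%}")
print(f"Deaths in each group: {N} vs {N}")  # identical -- screening changed no outcome
```

Diagnosing 5 years earlier pushes 5-year survival to 100 percent, yet exactly the same patients die at exactly the same ages. That is the entire trick.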
Unfortunately, a study published today in the Annals of Internal Medicine concluded that a whole lot of clinicians require remedial education regarding lead-time bias. A national sample of more than 400 primary care physicians was presented with scenarios describing the effects of two hypothetical screening tests: the first improved 5-year survival from 68 to 99 percent, and the second reduced mortality from 2 deaths per 1000 to 1.6 deaths per 1000. Ninety-five percent of surveyed physicians said that they would "definitely" or "probably" recommend the test that improved 5-year survival, even though this information (which is based on lead-time statistics associated with screening for prostate cancer) provides absolutely no evidence that the test improves patient outcomes. In contrast, considerably fewer physicians were enthusiastic about the test that actually lowered the mortality rate, perhaps because the absolute risk reduction seemed unimpressive by comparison.
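The arithmetic behind that "unimpressive" second test is worth spelling out, using the mortality figures from the scenario above:

```python
# Mortality figures from the survey's second hypothetical test.
baseline = 2.0 / 1000   # deaths per person without screening
screened = 1.6 / 1000   # deaths per person with screening

arr = baseline - screened   # absolute risk reduction
rrr = arr / baseline        # relative risk reduction
nnt = 1 / arr               # number needed to screen to prevent one death

print(f"Absolute risk reduction: {arr * 1000:.1f} deaths per 1000 screened")
print(f"Relative risk reduction: {rrr:.0%}")
print(f"Number needed to screen: {nnt:.0f}")
```

A 20 percent relative reduction in mortality works out to 0.4 fewer deaths per 1000 screened, or about 2500 people screened to prevent one death. Unlike an inflated 5-year survival figure, these numbers describe a real benefit, however modest it looks.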
Another disappointing finding was that almost half of surveyed physicians believed that a screening test "saves lives" if more cancers are detected in screened than in unscreened populations. In truth, finding more cancers is poor assurance of better outcomes. For example, a randomized trial of screening for ovarian cancer found no difference in mortality rates between women assigned to annual screening and those receiving usual care, despite 21% more cancers being detected in the screening group. This study confirmed the long-held suspicion that led most medical organizations to recommend against ovarian cancer screening in asymptomatic women. Unfortunately, another survey published recently in Annals found that one-third of a nationally representative sample of family physicians, general internists, and obstetricians nonetheless believe that ovarian cancer screening is effective.
The Institute of Medicine has identified low levels of health literacy in the general population as a major obstacle to ensuring optimal health and quality of care. But how can physicians expect our patients to make informed decisions regarding screening tests when large numbers of us are functionally illiterate regarding basic screening concepts? As a medical educator, I took home this message from these studies: medical schools, residency programs, and certifying boards must devote more time and effort to improving physicians’ literacy regarding screening, lest misleading survival statistics continue to fuel overuse of ineffective tests and expose countless patients to potential harm.