22 July 2011

How Bright Promise in Cancer Testing Fell Apart

When Juliet Jacobs found out she had lung cancer, she was terrified, but realized that her hope lay in getting the best treatment medicine could offer. So she got a second opinion, then a third. In February of 2010, she ended up at Duke University, where she entered a research study whose promise seemed stunning.

Doctors would assess her tumor cells, looking for gene patterns that would determine which drugs would best attack her particular cancer. She would not waste precious time with ineffective drugs or trial-and-error treatment. The Duke program — considered a breakthrough at the time — was the first fruit of the new genomics, a way of letting a cancer cell’s own genes reveal the cancer’s weaknesses.

But the research at Duke turned out to be wrong. Its gene-based tests proved worthless, and the research behind them was discredited. Mrs. Jacobs died a few months after treatment, and her husband and other patients’ relatives have retained lawyers.

The episode is a stark illustration of serious problems in a field in which the medical community has placed great hope: using patterns from large groups of genes or other molecules to improve the detection and treatment of cancer. Companies have been formed and products have been introduced that claim to use genetics in this way, but some of those assertions have turned out to be unfounded. While researchers agree there is great promise in this science, it has yet to yield many reliable methods for diagnosing cancer or identifying the best treatment.

Instead, as patients and their doctors try to make critical decisions about serious illnesses, they may be getting worthless information that is based on bad science. The scientific world is concerned enough that two prominent groups, the National Cancer Institute and the Institute of Medicine, have begun examining the Duke case; they hope to find new ways to evaluate claims based on emerging and complex analyses of patterns of genes and other molecules.

So far, the Food and Drug Administration “has generally not enforced” its regulation of tests created by individual labs because, until recently, such tests were relatively simple and relied heavily on the expertise of a particular doctor, said Erica Jefferson, a spokeswoman for the agency. But now, with labs offering more complex tests on a large scale, the F.D.A. is taking a new look at enforcement.

Dr. Scott Ramsey, director of cancer outcomes research at the Fred Hutchinson Cancer Research Center in Seattle, says there is already “a mini-gold rush” of companies trying to market tests based on the new techniques, at a time when good science has not caught up with the financial push. “That’s the scariest part of all,” Dr. Ramsey said.

Doctors say the heart of the problem is the intricacy of the analyses in this emerging field and the difficulty in finding errors. Even well-respected scientists often “oversee a machine they do not understand and cannot supervise directly” because each segment of the research requires different areas of expertise, said Dr. Lajos Pusztai, a breast cancer researcher at M. D. Anderson Cancer Center at the University of Texas. As a senior scientist, he added, “It’s true for me, too.”

The Duke case came right after two other claims that gave medical researchers pause. Like the Duke research, both relied on complex analyses to detect patterns of genes or cell proteins, but these were tests meant to find ovarian cancer in patients’ blood. One, OvaSure, was developed by a Yale scientist, Dr. Gil G. Mor, licensed by the university and sold to patients before it was found to be useless.

The other, OvaCheck, was developed by a company, Correlogic, with contributions from scientists from the National Cancer Institute and the Food and Drug Administration. Major commercial labs licensed it and were about to start using it before two statisticians from M. D. Anderson discovered and publicized its faults.

The Duke saga began when a prestigious journal, Nature Medicine, published a paper on Nov. 6, 2006, by Dr. Anil Potti, a cancer researcher at Duke University Medical Center; Joseph R. Nevins, a senior scientist there; and their colleagues. They wrote about genomic tests they developed that looked at the molecular traits of a cancerous tumor and figured out which chemotherapy would work best.

Other groups of cancer researchers had been trying to do the same thing.

“Our group was despondent to get beaten out,” said Dr. John Minna, a lung cancer researcher at the University of Texas Southwestern Medical Center. But Dr. Minna rallied; at the very least, he thought, he would make use of this incredible discovery to select drugs for lung cancer patients.

First, though, he asked two statisticians at M. D. Anderson, Keith Baggerly and Kevin Coombes, to check the work. Several other doctors approached them with the same request.

Dr. Baggerly and Dr. Coombes found errors almost immediately. Some seemed careless — moving a row or a column over by one in a giant spreadsheet — while others seemed inexplicable. The Duke team shrugged them off as “clerical errors.”
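To see why a one-column slip is so damaging, here is a minimal sketch with made-up data (not the Duke spreadsheets or the statisticians’ actual analysis): once a label column shifts by one position, every sample is paired with a neighbor’s drug-sensitivity label, and any “signature” fit to the misaligned data learns nothing real.

```python
# Hypothetical illustration (made-up data): how a one-column shift in a
# spreadsheet silently mislabels every sample.

samples = ["tumor_A", "tumor_B", "tumor_C", "tumor_D"]
sensitive = [True, False, True, False]  # correct drug-sensitivity labels

# The "clerical error": the label column slides over by one position.
shifted = sensitive[-1:] + sensitive[:-1]

for sample, right, wrong in zip(samples, sensitive, shifted):
    status = "MISLABELED" if right != wrong else "ok"
    print(f"{sample}: true={right}, after shift={wrong}  [{status}]")
```

In a real gene-expression table with thousands of rows, a shift like this leaves the file looking perfectly normal, which is why the misalignment surfaced only when outside statisticians reran the numbers.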

And the Duke researchers continued to publish papers on their genomic signatures in prestigious journals. Meanwhile, they started three trials using the work to decide which drugs to give patients.

Dr. Baggerly and Dr. Coombes tried to sound an alarm. They got the attention of the National Cancer Institute, whose own investigators wanted to use the Duke system in a clinical trial but were dissuaded by the criticisms. Finally, the two statisticians published their analysis in The Annals of Applied Statistics, a journal that medical scientists rarely read.

The situation finally grabbed the cancer world’s attention last July, not because of the efforts of Dr. Baggerly and Dr. Coombes, but because a trade publication, The Cancer Letter, reported that the lead researcher, Dr. Potti, had falsified parts of his résumé. He claimed, among other things, that he had been a Rhodes scholar.

“It took that to make people sit up and take notice,” said Dr. Steven Goodman, professor of oncology, pediatrics, epidemiology and biostatistics at Johns Hopkins University.

In the end, four gene signature papers were retracted. Duke shut down three trials using the results. Dr. Potti resigned from Duke. He declined to be interviewed for this article. His collaborator and mentor, Dr. Nevins, no longer directs one of Duke’s genomics centers.

The cancer world is reeling.

The Duke researchers had even set up a company — now disbanded — and planned to sell their test to determine cancer treatments. Duke cancer patients and their families, including Mrs. Jacobs’s husband, Walter Jacobs, say they feel angry and betrayed. And medical researchers see the story as a call to action. With such huge data sets and complicated analyses, researchers can no longer trust their hunches that a result does — or does not — make sense.
