Below I reproduce a piece from The Wall Street Journal on the torcetrapib case, following the suspension of the Illuminate study. It is perhaps the most complete account so far. The problem with the text is the author's logic and his unfamiliarity with basic concepts such as disease and risk factor.
Contrary to what many claim, high cholesterol is not a disease but a risk factor, that is, a condition that increases the probability of disease in the future. It follows that evaluating a pharmacological intervention to treat a disease is one thing, and evaluating one to reduce a risk factor is quite another. In the present case, however, there is only one real question: why was the study not halted sooner?
Incidentally, we await the publication of the Illuminate study in a high-circulation journal with the same speed with which companies manage to write up and submit when the result is positive.
Relatively Small Number of Deaths Have Big Impact in Pfizer Drug Trial
December 6, 2006, by Carl Bialik
Pfizer Inc. rattled the medical world -- and the stock market -- with its announcement that it was abandoning a potentially blockbuster cholesterol drug after some patients died during clinical trials.
At first glance, it might seem like Pfizer's decision was a bit hasty. This was a large clinical trial, involving 15,003 people at high risk for cardiovascular disease. Some 82 people taking the new drug died, while 51 people in a control group also died. How could such a small difference be enough to justify Pfizer's decision to walk away from a drug it spent $1 billion developing, not to mention the $21 billion in stock-market value the company lost after announcing the news?
Did Pfizer act appropriately in halting the trial, and development of the drug, when it did? Do you think there are sufficient safety measures in place for clinical trials? Have you or a family member been part of a clinical trial? Were you aware of the measures taken to protect participants?

A closer look at the figures -- and an understanding of the roles the stats play in medical trials -- shows why the small difference had such a big impact.

Pfizer told me it divided the pool of participants down the middle, which is standard in such trials. Half of them, the control group, took the company's successful cholesterol-lowering drug Lipitor. The other half took Lipitor plus the new drug, called torcetrapib. On a straight percentage basis, about 0.7% of those in the Lipitor-only group died, while 1.1% of people taking the new drug regimen died. That's a difference of just 0.4 percentage point, but it doesn't tell the whole story.
Scientists and statisticians I spoke with said it is more important to calculate the relative risks facing the two groups. Crunching the numbers that way, you can say that the people taking the new drug were 60% more likely to die -- hardly a small difference. Still, that stat suggests why Pfizer might want to stop the trial. To understand why Pfizer chose to stop the testing when it did, we have to dig deeper. (It turns out that the difference between stopping and proceeding likely came down to just a few deaths. More on that in a bit.)

Clinical trials usually get halted for one of two reasons: Either a drug shows overwhelming promise and it wouldn't be fair to delay its release, or, as in this case, the results suggest great risk. Each clinical trial is different, and numbers aren't the only factor in these decisions. But for many trials, including Pfizer's, monitoring boards set numerical thresholds for bad outcomes before the trial begins. When the thresholds are crossed, the tests are stopped.

Philip Barter, director of the Heart Research Institute in Australia and chairman of the steering committee overseeing Pfizer's torcetrapib study, told me he was contacted Friday evening by Charles Hennekens, a professor of biomedical science at Florida Atlantic University and chair of Pfizer's safety monitoring board. Dr. Barter said Dr. Hennekens shared troubling information about deaths in the trial. Dr. Barter told me he realized the "imbalance in deaths had crossed the statistical boundary" that had been set before the trial to trigger an automatic halt. Dr. Barter stopped the study Saturday. (Dr. Hennekens referred all questions about the study to Dr. Barter.)

The "statistical boundary" Dr. Barter referred to wasn't some arbitrary figure, but rather a single number that is calculated as part of clinical trials. The timing of Pfizer's decision lies in this calculation.
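The arithmetic behind those figures is easy to check. Here is a minimal sketch in Python, assuming the 15,003 participants were split into two arms of roughly 7,500 each (the article does not give the exact per-arm counts):

```python
# Approximate arm sizes: 15,003 participants split "down the middle".
# Exact per-arm counts weren't reported, so ~7,500 each is an assumption.
n_per_arm = 15003 / 2

deaths_new_drug = 82  # Lipitor + torcetrapib arm
deaths_control = 51   # Lipitor-only arm

rate_new_drug = deaths_new_drug / n_per_arm  # ~1.1%
rate_control = deaths_control / n_per_arm    # ~0.7%

abs_diff = rate_new_drug - rate_control      # ~0.4 percentage point
relative_risk = rate_new_drug / rate_control # ~1.6, i.e. ~60% more likely to die

print(f"new-drug death rate: {rate_new_drug:.2%}")
print(f"control death rate:  {rate_control:.2%}")
print(f"absolute difference: {abs_diff:.2%} points")
print(f"relative risk:       {relative_risk:.2f}")
```

With equal arms the relative risk reduces to 82/51, about 1.61, which is where the "60% more likely to die" figure comes from.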
The magic number is something called a "p value." Broadly, p measures the probability that a particular result -- in this case, the difference in the rates of death between the two drug groups -- could be chalked up to a statistical anomaly. Put another way: if a different 15,003 people had been selected, how likely is it that a gap this large would have appeared even if the drug made no difference? The lower the p value, the less plausible it is that the numbers are due to some anomaly. Calculating p in this case is complex, and takes into account several factors, including the number of people in the study.

Dr. Barter told me his committee set the p threshold for the Pfizer study at 0.01 -- meaning that once researchers computed a p that fell below that number, they would know that the results were indeed significant, and not something that would be likely to change by evaluating a different pool of patients. Researchers calculated p monthly; given the deaths, any number below 0.01 would mean halting the test. Indeed, the study was halted when new data produced a p that crossed the threshold. (In simple terms, a p value of 0.01 means that if the drug in fact made no difference, a disparity in deaths at least this large would turn up by chance in only about one in 100 such trials.)

At my request, Lisa Schwartz and Steven Woloshin, both associate professors of medicine at Dartmouth Medical School, calculated p for the Pfizer study at the time it was halted. They came up with a value of 0.007. Dr. Barter confirmed to me that their calculation was correct.
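That 0.007 can be reproduced with a standard two-sided test comparing two proportions. The article doesn't say which test Schwartz and Woloshin used, so treat the following as an approximation: a pooled two-proportion z-test, again assuming arms of about 7,500.

```python
import math

def two_sided_p(deaths_a, deaths_b, n_per_arm):
    """Two-sided pooled z-test for a difference between two proportions."""
    rate_a = deaths_a / n_per_arm
    rate_b = deaths_b / n_per_arm
    pooled = (deaths_a + deaths_b) / (2 * n_per_arm)       # death rate with arms pooled
    se = math.sqrt(pooled * (1 - pooled) * 2 / n_per_arm)  # standard error of the difference
    z = abs(rate_a - rate_b) / se
    return math.erfc(z / math.sqrt(2))  # two-sided normal tail probability

# 82 vs. 51 deaths in arms of ~7,500
print(two_sided_p(82, 51, 7500))  # ~0.007, below the 0.01 boundary
```

The z statistic comes out near 2.7, and the corresponding two-sided tail area is about 0.007, matching the Dartmouth calculation.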
If you've seen p values before, you might be surprised that the threshold was set at 0.01 -- 0.05 is usually considered the threshold for statistical significance. Using such a value, the Pfizer study could have been halted even sooner. But experts told me that the threshold must be made more stringent -- that is, set at a lower p value -- when p is measured frequently, to make sure that too much weight isn't given to any single reading: each look at the data is another chance for a fluke to cross the line.

"You don't want to have a small-numbers problem, where one anecdote drives the decision," Kimberly Thompson, associate professor of risk analysis and decision science at the Harvard School of Public Health, told me.

Subjective clinical expertise comes into play in setting these thresholds before the trial starts. "If you're talking about a drug to cure cancer when there is no other treatment, you would tolerate an enormous risk before pulling the plug," Brian Strom, chairman and professor of biostatistics and epidemiology at the University of Pennsylvania, and a veteran of several safety monitoring boards, told me. "Where you're talking about a drug to treat allergies, where there are other drugs available and they are safe, you would tolerate much less risk."
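The inflation the experts are guarding against is easy to see in a toy simulation: test a drug that truly does nothing, look at the accumulating data every "month," and count how often some look crosses the threshold by chance alone. The look schedule, accrual rate, and death rate below are invented for illustration; nothing here is taken from the actual trial.

```python
import math
import random

def two_sided_p(deaths_a, deaths_b, n_per_arm):
    """Two-sided pooled z-test for a difference between two proportions."""
    pooled = (deaths_a + deaths_b) / (2 * n_per_arm)
    if pooled == 0:
        return 1.0  # no deaths yet: no evidence of a difference
    se = math.sqrt(pooled * (1 - pooled) * 2 / n_per_arm)
    z = abs(deaths_a - deaths_b) / n_per_arm / se
    return math.erfc(z / math.sqrt(2))

def false_stop_rate(threshold, looks=12, accrual=400, death_rate=0.01, trials=1000):
    """Fraction of simulated no-effect trials halted by a monthly-look rule."""
    stops = 0
    for _ in range(trials):
        deaths_a = deaths_b = n = 0
        for _ in range(looks):  # one interim look per "month"
            n += accrual        # new patients accrue in each arm
            deaths_a += sum(random.random() < death_rate for _ in range(accrual))
            deaths_b += sum(random.random() < death_rate for _ in range(accrual))
            if two_sided_p(deaths_a, deaths_b, n) < threshold:
                stops += 1
                break
    return stops / trials

random.seed(0)
print("false stops at 0.05:", false_stop_rate(0.05))  # well above 5%
print("false stops at 0.01:", false_stop_rate(0.01))  # far fewer
```

With a naive 0.05 rule checked a dozen times, the chance of halting a perfectly safe drug climbs well past 5%; tightening the per-look boundary to 0.01 keeps the overall false-stop rate much closer to the intended level. Group-sequential trial designs formalize this trade-off with alpha-spending rules.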
Several experts I interviewed said that Pfizer was right to halt the study. "The purpose of the drug is to reduce the chance of death," Dr. Woloshin and Dr. Schwartz of Dartmouth wrote in their analysis for me. "Since the drug increases death, there is no reason to pursue it further."
But their analysis also showed how sensitive the outcome was to small numbers. Just two fewer deaths among those people taking the experimental drug -- 80 instead of 82 -- would have led to a p value of 0.011, just above the threshold.
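That sensitivity is easy to verify with the same pooled z-test sketched above (again with the assumed ~7,500-per-arm split):

```python
import math

# 80 deaths vs. 51 in arms of ~7,500 (assumed split, as before)
n = 7500
pooled = (80 + 51) / (2 * n)
se = math.sqrt(pooled * (1 - pooled) * 2 / n)
z = (80 / n - 51 / n) / se
print(math.erfc(z / math.sqrt(2)))  # ~0.011, just over the 0.01 boundary
```

With 82 deaths the same calculation gives about 0.007; with 80 it slips just above 0.01, which is exactly the knife's edge the article describes.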
Dr. Barter agreed that "just a couple" fewer deaths could have let the study continue. "If it had been just above the boundary, I don't know what we would have done -- whether we would have waited another month or not to halt the study," Dr. Barter told me. He added, "As you know, statistics is not an exact science."

Copyright The Wall Street Journal.