Many people suspect that statistics are manipulated to suit an investigator’s motives.
They might be right! How can you tell?
Surely meta-analyses help provide reliable information?
Actually, there is potential for manipulation in meta-analyses too…
What is a meta-analysis?
Briefly, meta-analyses attempt to pool data from different sources on a particular topic. For example, if you were interested in how effective ibuprofen is for treating a headache compared with paracetamol, you could take the data from relevant previously published trials and summarise what they all found.

That sounds like a systematic review… There is a clear difference between a review and a meta-analysis, but it is often not understood or not remembered! In a review, the results from each source are discussed. In a meta-analysis, the actual data (summary measures from each trial, e.g. sample size and number of events) are used to obtain 'pooled' (also called 'combined' or 'common') estimates.
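To make "pooling" concrete, here is a minimal sketch of the most common approach, inverse-variance weighting of log odds ratios. The trial numbers are invented purely for illustration; they are not from any real study.

```python
import math

# Hypothetical trial summaries (events / total in each arm) -- invented numbers
trials = [
    {"e1": 30, "n1": 100, "e2": 40, "n2": 100},
    {"e1": 25, "n1": 80,  "e2": 35, "n2": 80},
    {"e1": 45, "n1": 150, "e2": 50, "n2": 150},
]

def log_odds_ratio(t):
    """Log odds ratio and its variance from a 2x2 table."""
    a, b = t["e1"], t["n1"] - t["e1"]
    c, d = t["e2"], t["n2"] - t["e2"]
    lor = math.log((a * d) / (b * c))
    var = 1/a + 1/b + 1/c + 1/d   # standard large-sample variance
    return lor, var

# Fixed-effect (inverse-variance) pooling: each trial is weighted by
# the inverse of its variance, so larger, more precise trials count more.
num = den = 0.0
for t in trials:
    lor, var = log_odds_ratio(t)
    w = 1.0 / var
    num += w * lor
    den += w

pooled_lor = num / den
pooled_or = math.exp(pooled_lor)
print(f"Pooled odds ratio: {pooled_or:.2f}")
```

The key point is that only the summary measures (events and sample sizes) from each trial are needed, not the patient-level data.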
Two main methods for manipulation
[Note: The following commentary is not intended to imply that anyone's motives are dishonourable. It is merely a topical example of how results from meta-analyses can differ.]
First, the results of a meta-analysis will differ depending on which studies are included. For example, to investigate ibuprofen, one meta-analysis could be performed using only the studies that suggest ibuprofen is better than paracetamol, and another using only the studies that suggest paracetamol is better. Clearly, these meta-analyses would produce very different results. It is quite common to find more than one group researching the same topic with a meta-analysis, and it can be difficult to decide which studies to include.

To take an up-to-date example, a recent article on the PharmaTimes website described two meta-analyses investigating the Spiriva and Atrovent medications for chronic obstructive pulmonary disease (COPD).1 One meta-analysis, published in the Journal of the American Medical Association, concluded that the drugs were associated with an increased risk of cardiovascular death, myocardial infarction and stroke.2 The second concluded that there was no evidence of an association.3 Its results are currently being reviewed by regulatory authorities and have therefore not been published yet. These two analyses are a great example of how a meta-analysis can reach different conclusions depending on which studies it includes. Both used controlled clinical trials and both covered a large number of patients (more than 14,000). Although there could be many differences in their designs, which cannot be evaluated until the second analysis is published, the main difference between them is most likely the set of studies included: the first meta-analysis used 17 trials identified from systematic searches, while the second used 30 trials whose provenance is not yet known.4
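The effect of cherry-picking studies is easy to demonstrate numerically. The sketch below pools a set of invented effect estimates (log odds ratios, where negative favours the drug) three ways: using all studies, using only the "drug better" studies, and using only the "drug worse" studies. All numbers are made up for illustration.

```python
# Invented (effect, variance) pairs for ten hypothetical trials.
# A negative effect favours the drug; a positive effect favours the comparator.
studies = [(-0.5, 0.04), (-0.3, 0.05), (-0.2, 0.06), (-0.1, 0.04),
           (0.0, 0.05), (0.1, 0.06), (0.2, 0.04), (0.3, 0.05),
           (0.4, 0.06), (0.5, 0.04)]

def pool(subset):
    """Fixed-effect inverse-variance pooled estimate."""
    w = [1 / v for _, v in subset]
    return sum(wi * e for wi, (e, _) in zip(w, subset)) / sum(w)

all_studies = pool(studies)
only_negative = pool([s for s in studies if s[0] < 0])  # cherry-picked
only_positive = pool([s for s in studies if s[0] > 0])  # cherry-picked

print(f"All studies:        {all_studies:+.2f}")
print(f"'Drug better' only: {only_negative:+.2f}")
print(f"'Drug worse' only:  {only_positive:+.2f}")
```

The same pooling method applied to different subsets of the evidence yields a near-null estimate, a clearly favourable one, and a clearly unfavourable one, which is exactly why the inclusion criteria of a meta-analysis deserve scrutiny.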
The second main method for manipulation is the method of analysis itself. In brief, there are two main types of model that can be used: a fixed-effect model or a random-effects model. What is supposed to happen, as with most research, is that an analysis plan is developed before any analysis is conducted. The choice of model should depend on the distributional assumptions and on the amount of heterogeneity (how different the studies are). Fixed-effect models assume that all the studies estimate one common effect, so if there is evidence of heterogeneity, random-effects models are normally more appropriate. However, this does not seem to be well understood by all investigators conducting meta-analyses: some run every type of analysis and then choose whichever statistic suits them best. The second main method for manipulation in meta-analyses is therefore performing an analysis other than the most appropriate one. Perhaps this attitude arises from misinterpretation of respectable literature, such as the learning material on the Cochrane Collaboration website, which suggests running the different models to investigate heterogeneity.
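To show how the two models can differ, here is a sketch of both applied to the same invented data, with the random-effects between-study variance estimated by the widely used DerSimonian-Laird method (one of several possible estimators; the effect estimates and variances below are made up for illustration).

```python
# Invented (effect, within-study variance) pairs for five
# deliberately heterogeneous hypothetical trials.
studies = [(-0.8, 0.02), (-0.1, 0.03), (0.3, 0.02), (-0.6, 0.04), (0.2, 0.03)]

def fixed_effect(studies):
    """Inverse-variance fixed-effect pooled estimate and the weights used."""
    w = [1 / v for _, v in studies]
    est = sum(wi * e for wi, (e, _) in zip(w, studies)) / sum(w)
    return est, w

def random_effects(studies):
    """DerSimonian-Laird random-effects pooled estimate."""
    est_f, w = fixed_effect(studies)
    # Cochran's Q measures variation beyond what sampling error explains
    q = sum(wi * (e - est_f) ** 2 for wi, (e, _) in zip(w, studies))
    df = len(studies) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)   # estimated between-study variance
    # Random-effects weights add tau^2 to each within-study variance
    w_star = [1 / (v + tau2) for _, v in studies]
    est_r = sum(wi * e for wi, (e, _) in zip(w_star, studies)) / sum(w_star)
    return est_r, tau2

fe, _ = fixed_effect(studies)
re, tau2 = random_effects(studies)
print(f"Fixed-effect estimate:   {fe:+.3f}")
print(f"Random-effects estimate: {re:+.3f} (tau^2 = {tau2:.3f})")
```

With heterogeneous studies the two models give different point estimates and, more importantly, different (random-effects being wider) confidence intervals, so an investigator who runs both and reports only the more flattering one is misrepresenting the evidence.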
Do not be put off!
Do not let all this put you off conducting or reading meta-analyses. This article is only intended to make readers more cautious about results from meta-analyses. People commonly believe that meta-analyses are trustworthy because they incorporate lots of information, or because they calculate statistics (as opposed to reviews). Whilst meta-analyses do have many advantages, put real trust only in results from meta-analyses that give you a special reason to trust them, for example because they were conducted by respectable statisticians or because their methodology appears to have been well planned in advance. If you are thinking of conducting a meta-analysis, make sure you know how to do it correctly!
How can I correctly conduct a meta-analysis?
There are many books dedicated to meta-analyses; a good list is available from meta-analysis.com.
There are some useful online resources, such as the previously mentioned learning material on the Cochrane Collaboration website.
Find yourself a statistician who has experience in meta-analyses. However, beware of statisticians who have learnt only from other researchers rather than from qualified professionals!
Better still, enrol yourself on a course. However, courses convenient for your location can be difficult to find, and then you have to get a place on the course…
1. Grogan, K. Boehringer leaps to defence of Spiriva over cardiovascular risk claim. September 2008. Accessed on 26/11/08 from the PharmaTimes website.
2. Singh, S., Loke, Y.K., Furberg, C.D. (2008) Inhaled anticholinergics and risk of major adverse cardiovascular events in patients with chronic obstructive pulmonary disease. Journal of the American Medical Association 300: 1439-1450.
3. Established safety profile of Spiriva confirmed by 30 rigorously controlled clinical trials. September 2008. Accessed on 26/11/08 from the Boehringer-Ingelheim website.