Category Archives: Articles

Medical Statistics and Biostatistics defined

Medical Statistics

Medical statistics is the application of statistical knowledge and methods to the field of medicine and medical practice.

Although medical statistics has been a recognised branch of statistics in the UK for more than 40 years, the term does not appear to have come into general use in North America. There the wider term ‘biostatistics’ is preferred, encompassing the application of statistics (the branch of applied mathematics concerned with the collection and interpretation of quantitative data and with the use of probability theory to estimate population parameters) to medical data as well as to data from the wider field of biology.

My preferred definition of medical statistics is the one I have coined above. The current (June 2009) definition of medical statistics in Wikipedia is, like many things in Wikipedia, an unsatisfactory work in progress. It says that medical statistics is “the field of medicine dealing with applications of statistics to the field of health and medicine.”

An entry in Answers.com throws some interesting light on the history of what we would now call medical statistics, quoting sources that trace its roots to the eighteenth century:

“One tradition [which flowed from Graunt’s and Petty’s early work] was medical statistics, which developed most fully in England during the eighteenth century. Physicians such as James Jurin (1684–1750) and William Black (1749–1829) advocated the collection and evaluation of numerical information about the incidence and mortality of diseases. Jurin pioneered the use of statistics in the 1720s to evaluate medical practice in his studies of the risks associated with smallpox inoculation. William Black coined the term medical arithmetic to refer to the tradition of using numbers to analyze the comparative mortality of different diseases. New hospitals and dispensaries such as the London Smallpox and Inoculation Hospital, established in the eighteenth century, provided institutional support for the collection of medical statistics; some treatments were evaluated numerically.”

Biostatistics

A search for the term ‘biostatistics’ or ‘biometrics’ returns many definitions, of which the following are a selection:

“The theory and techniques for describing, analyzing, and interpreting health data.” Johns Hopkins Bloomberg School of Public Health.

“The use of statistical tests to analyze biological data.” Duke Clinical Research Institute.

“The science of statistics applied to the analysis of biological or medical data.” The American Heritage Medical Dictionary (2004) Published by Houghton Mifflin Company.

“Numeric data on births, deaths, diseases, injuries, and other factors affecting the general health and condition of human populations. Also called vital statistics.” Mosby’s Medical Dictionary, 8th edition (2009), Elsevier.

“Biostatistics (a combination of the words biology and statistics; sometimes referred to as biometry or biometrics) is the application of statistics to a wide range of topics in biology. The science of biostatistics encompasses the design of biological experiments, especially in medicine and agriculture; the collection, summarization, and analysis of data from those experiments; and the interpretation of, and inference from, the results.” Wikipedia.

“A branch of biology that studies biological phenomena and observations by means of statistical analysis.” WordNet.

“The science of collecting and analyzing biologic or health data using statistical methods. Biostatistics may be used to help learn the possible causes of a cancer or how often a cancer occurs in a certain group of people. Also called biometrics and biometry.” National Cancer Institute.

Go to the Contacts page to request permission to reproduce an article

How to survive a puma attack

A short article in the March 2009 issue of the statistical magazine Significance pointed out a fantastic piece of research into how not to be eaten by a puma. The research, by R. G. Coss and others, found that people who did not run away from pumas had the greatest frequency of severe injury (43%) and the lowest likelihood of escaping injury (26%). In other words, if a puma is approaching you… run!

It is natural to wonder why this type of research is conducted. What benefit or insight does it provide? There is actually more to the paper: for example, the authors also looked at the effects of age and of group size on the likelihood of injury, so it probably is valuable research. However, I would like to use it to illustrate a point I have been making for years.

Each day, in newspapers, in TV news items and in the everyday discussions people have, there are stories of the latest methods for preventing a disease or condition. For example, there was recently an article on the benefit of taking vitamin D supplements to prevent bone fractures. Whilst it is always great to see research into the prevention or treatment of disease, are people supposed to change their habits in line with the latest study? There is always a lot of conflicting advice, which can leave people confused about what to do. But the point is this: no one knows what is best; there is just a growing body of evidence one way or the other.

Until there is a large enough body of evidence one way or the other, perhaps we ought to just go with our instincts. We are now pretty certain, for example, that smoking is an unhealthy habit. Logic tells us that we are more likely to survive by running away from a puma than by standing still. It is difficult to imagine that taking vitamin D supplements could harm you, so maybe it is best, if you can afford it, to take them. Until we are as certain of something as we are about smoking, or about eating plenty of fruit and veg, perhaps it is best to go with instinct rather than with the selective reporting of research in the press.

Reference
Coss, R.G., Fitzhugh, L.E., Schmid-Holmes, S., Kenyon, M.W. and Etling, K. (2009) The effects of human age, group composition, and behavior on the likelihood of being injured by attacking pumas. Anthrozoös 22, 77–87.

Go to the Contacts page to request permission to reproduce an article

Changing endpoints

Imagine we are playing Monopoly. When it is my turn, I roll the die five times in a row and then choose the roll I prefer, the one which lands my piece on a winning square. Wouldn’t you tell me I cannot do that, accuse me of cheating and walk off? It stands to reason that we should not tolerate the equivalent in clinical trials: the changing of endpoints to gain better results.
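
To put a rough number on the analogy, here is a minimal sketch in Python (assuming, purely for illustration, a single six-sided die and one winning face):

```python
# Chance of landing on the winning square at least once when allowed
# five rolls instead of one (single six-sided die, one target face).
p_one_roll = 1 / 6                          # playing by the rules
p_five_rolls = 1 - (1 - p_one_roll) ** 5    # roll five times, keep the best

print(f"One roll:   {p_one_roll:.2f}")      # ~0.17
print(f"Five rolls: {p_five_rolls:.2f}")    # ~0.60
```

Giving myself the extra rolls more than triples my chance of getting the result I want, and exactly the same arithmetic drives the inflation of false positives discussed below.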

What are “endpoints”?
Before a clinical trial is conducted, its endpoints should be specified. These are the outcome measures of interest: for example, in a trial of a smoking cessation therapy, the primary endpoint would be smoking status and a secondary endpoint might be a reduction in the number of cigarettes smoked.

What is wrong with changing endpoints?
Changing the pre-specified endpoints once a trial has begun can introduce bias. As in the Monopoly analogy, if the endpoints of interest are changed to get ‘better’ results, i.e. more ‘significant’ or publishable ones, the trial will produce biased findings that misinform rather than inform research. This includes selecting new endpoints because they display a trend towards ‘significance’, and quietly dropping endpoints that were investigated but failed to display the desired trend. All of this increases the chance of false-positive (type I) errors.
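
A small simulation makes the inflation concrete. The sketch below (all numbers hypothetical) generates trials in which the treatment genuinely does nothing, tests five candidate endpoints in each, and reports only the most ‘significant’ one:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_trials, n_per_arm, n_endpoints, alpha = 10_000, 50, 5, 0.05

false_positives = 0
for _ in range(n_trials):
    # Independent endpoints with identical distributions in both arms,
    # i.e. no true treatment effect anywhere.
    treatment = rng.normal(size=(n_endpoints, n_per_arm))
    control = rng.normal(size=(n_endpoints, n_per_arm))
    p_values = stats.ttest_ind(treatment, control, axis=1).pvalue
    if p_values.min() < alpha:   # cherry-pick the best-looking endpoint
        false_positives += 1

print(f"Nominal type I error rate:    {alpha:.0%}")
print(f"Rate after endpoint-shopping: {false_positives / n_trials:.0%}")
# With five independent endpoints, expect roughly 1 - 0.95**5, about 23%.
```

Even with nothing going on, shopping among five independent endpoints produces a ‘significant’ finding in nearly one trial in four.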

Further discussion of the problems created by changing endpoints can be found in the very clear and helpful essay by Scott Evans. He also discusses the circumstances in which changing endpoints may be appropriate, for example when more accurate biomarkers or outcome measures are discovered, and proposes a series of issues to consider when assessing and handling changes in endpoints in clinical trials.

Are many trials guilty?
Chan et al. assessed the selective reporting of outcomes in 102 trials by comparing the published results with the trial protocols, and found that 62% of trials had at least one primary outcome that had been changed, introduced or omitted without explanation. Furthermore, when Chan et al. sent a questionnaire to the lead investigators, 86% (42 of the 49 who responded) denied the existence of unreported outcomes despite evidence to the contrary.

This, plus evidence from other reports and examples, suggests that many clinical trials are guilty of changing endpoints during a trial without justification.

What can be done?
Any changes in endpoints should be declared and explained both to the trial registry and to any journal the manuscript is submitted to. Some measures are already in place; for example, some journals now require the protocol to be submitted along with the manuscript. But we need more. Selective reporting should be better ‘policed’, perhaps by people employed solely to investigate it in trials.

Researchers and trialists should be made more aware of the dangers of changing endpoints. We should all be better informed of the problems that can arise and of the few situations where changing endpoints can be appropriate.

Please read Scott Evans’ short article to further your awareness of why changing trial endpoints is problematic.

Go to the Contacts page to request permission to reproduce an article


Is anyone using Google Flu Trends?

Google Flu Trends was launched in November 2008, but what has become of it since?

Google Flu Trends uses people’s Google searches to assess influenza activity across the US. It provides a daily estimate of the number of people in different areas using particular search terms which indicate they might be suffering from flu. At the moment it covers only the US, but it may be extended to other countries.
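
The toy sketch below illustrates the general ‘nowcasting’ idea (synthetic numbers throughout; this is an illustration of the concept, not Google’s actual model or data): relate an official flu-activity measure to the share of searches that look flu-related, then predict current activity from the latest search data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic history: 52 weeks of flu-related search share alongside an
# official flu-activity figure (e.g. % of doctor visits for
# influenza-like illness), with the two loosely linked.
weeks = 52
search_share = rng.uniform(0.001, 0.02, size=weeks)
official_activity = 2.5 + 300 * search_share + rng.normal(0, 0.3, size=weeks)

# Fit a simple linear model of activity on search share.
slope, intercept = np.polyfit(search_share, official_activity, 1)

# 'Nowcast' from this week's search data, which is available at once
# rather than one to two weeks later like the official report.
todays_share = 0.015  # hypothetical latest value
print(f"Estimated current flu activity: {intercept + slope * todays_share:.1f}")
```

The value of such a system lies entirely in that timing gap: searches are observable today, whereas official surveillance figures lag behind.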

Google claim that it can “accurately estimate current flu levels one to two weeks faster than published Centers for Disease Control and Prevention (CDC) reports” (Google Flu Trends website). Whilst this is impressive, what has the information actually been used for?

Most of the media attention focused on the ethical issues surrounding Flu Trends, such as whether it is an invasion of privacy. Now that people seem to have accepted Flu Trends, perhaps the focus should be on how it can actually be used in reality, not just in theory.

There was a lot of talk of how the information could theoretically be used; for example, Google say that it “may enable public health officials and health professionals to better respond to seasonal epidemics”. But are any professionals actually using it? Are any individuals?

It is hard to imagine anyone in the US regularly checking Flu Trends for their state in order to decide whether or not to be in contact with people. Theoretically they could use it to help decide whether or not to get a flu vaccine, but will anyone actually do this?

It would be a real shame if such an impressive resource goes to waste. Please inform us if you use Flu Trends.

Google Flu Trends: http://www.google.org/flutrends/

Go to the Contacts page to request permission to reproduce an article

A comment on comments

Personally, when flicking through a journal such as The Lancet or the British Medical Journal, I find the most interesting section to be the ‘comments’ or ‘letters’ section. The articles there are generally shorter than those in other sections, so it is easier to get a picture of new developments and current concerns. Most importantly, they usually involve some evaluation of previously published articles or developments. Not only are the ‘comments/letters’ articles interesting, they are also very important for future research.

Recently (November 2008) Boys et al. published a comment in The Lancet on a TV programme and an editorial about prenatal screening for Down’s syndrome.1 The editorial and the TV programme concluded that, following prenatal serum or ultrasound screening for Down’s syndrome, two healthy babies are miscarried for every three Down’s syndrome births that are prevented.2,3 The comment by Boys et al. evaluated both the methods used to derive this and other statistics and the appropriateness of the early online publication (according to Boys et al., the editorial was published early to coincide with the television broadcast). In their opinion, the editorial would have benefited from an independent review process and submission to a peer-reviewed academic journal.

Aside from how interesting the commentary is (both the subject matter and the passion with which it is written make it a very stimulating read), it serves as a great example of how important the articles in the ‘comments/letters’ sections are. The article, like many others, evaluated the statistical methods and conduct of previous publications and called the research into question. Perhaps the more this happens, the more researchers will strive to improve the quality of their investigations.

References

1. Boys, C., Cunningham, C., McKenna, D., Robertson, P., Weeks, D.J. and Wishart, J. (2008) Prenatal screening for Down’s syndrome: editorial responsibilities. Lancet 372(9652), 1789–1791.
2. Channel 4 News. Exclusive: research suggests Down’s screening risk is ‘unacceptable’. 16 September 2008. Accessed 12 December 2008 from the Channel 4 website.
3. Buckley, F. and Buckley, S. (2008) Wrongful deaths and rightful lives: screening for Down syndrome. Down Syndrome Research and Practice, published online 16 September 2008. DOI: 10.3104/editorials.2087 (accessed 12 December 2008).

 

Go to the Contacts page to request permission to reproduce an article

Beware of meta-analysis manipulation!

Many people suspect that statistics are manipulated to suit an investigator’s motives.

They might be right! How can you tell?

Surely meta-analyses help provide reliable information?

Actually, there is potential for manipulation in meta-analyses too…

What is a meta-analysis?
Briefly, a meta-analysis attempts to pool data from different sources on a particular topic. For example, if you were interested in how effective ibuprofen is at treating a headache compared with paracetamol, you could take the data from relevant previously published trials and summarise what they all found.

That sounds like a systematic review… There is a clear difference between a review and a meta-analysis, though it is often not understood or not remembered! In a review, the results from each source are discussed. In a meta-analysis, the actual data (summary measures from each trial, e.g. sample size and number of events) are used to obtain ‘pooled’ or ‘combined’ or ‘common’ estimates.
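
To make the pooling step concrete, here is a minimal sketch of a fixed-effect, inverse-variance meta-analysis in Python. The effect estimates and variances are invented for illustration; they do not come from any real trials:

```python
import numpy as np

# Each trial contributes an effect estimate and its variance
# (the squared standard error). All numbers are made up.
effects = np.array([0.30, 0.45, 0.10, 0.38])
variances = np.array([0.04, 0.09, 0.02, 0.06])

# Inverse-variance weighting: more precise trials count for more.
weights = 1 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))

print(f"Pooled estimate: {pooled:.2f} (SE {pooled_se:.2f})")
print(f"95% CI: {pooled - 1.96 * pooled_se:.2f} to {pooled + 1.96 * pooled_se:.2f}")
```

The pooled estimate is just a weighted average of the individual trial estimates, which is exactly why the choice of which trials to include, and how to weight them, matters so much.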

Two main methods for manipulation
[Note: the following commentary is not intended to imply that anyone’s motives are dishonourable. It merely uses a topical example to show how results from meta-analyses can differ.]

First, the results of a meta-analysis will differ depending on which studies are included. For example, to investigate ibuprofen, one meta-analysis could be performed using all the studies which suggest that ibuprofen is better than paracetamol, and another using all the studies which suggest that paracetamol is better. Clearly, these meta-analyses would produce different results. It is quite common to find more than one group researching the same topic with a meta-analysis, and it can be difficult to select which studies to use.

To take a topical example, a recent article on the PharmaTimes website described two meta-analyses investigating the Spiriva and Atrovent medications for chronic obstructive pulmonary disease (COPD).1 One, published in the Journal of the American Medical Association, concluded that the drugs were associated with an increased risk of cardiovascular death, myocardial infarction and stroke.2 The second concluded that there was no evidence of an association.3 The results of that analysis are currently being reviewed by regulatory authorities and have therefore not yet been published. These two analyses are a great example of how different results can be obtained depending on which studies a meta-analysis includes. Both used controlled clinical trials and both included a large number of patients (more than 14,000). Although there could be many differences in their designs, which cannot be evaluated until the second analysis is published, the main difference between them is most likely the set of studies included: the first meta-analysis used 17 trials identified from systematic searches, while the second used 30 trials whose source we do not currently know.4
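
A toy demonstration of how much this can matter, using the same inverse-variance pooling as above (six invented studies; none of these numbers relate to the Spiriva or Atrovent analyses):

```python
import numpy as np

def pooled_estimate(effects, variances):
    """Fixed-effect, inverse-variance pooled estimate."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    weights = 1 / variances
    return np.sum(weights * effects) / np.sum(weights)

# Six invented studies: three favourable, three unfavourable.
effects = [0.40, 0.35, 0.50, -0.10, -0.20, 0.05]
variances = [0.04, 0.05, 0.06, 0.03, 0.05, 0.04]

print(f"All six studies:         {pooled_estimate(effects, variances):+.2f}")
print(f"Favourable three only:   {pooled_estimate(effects[:3], variances[:3]):+.2f}")
print(f"Unfavourable three only: {pooled_estimate(effects[3:], variances[3:]):+.2f}")
```

The same method, applied to different selections of the same evidence base, can show a clear benefit, no effect, or even harm.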

The second main route for manipulation is the method of analysis itself. In brief, there are two main types of model that can be used: a fixed-effect model or a random-effects model. What is supposed to happen, as with most research, is that the analysis is planned before any is conducted. The choice of model should depend on the distributional assumptions and on the amount of heterogeneity (how different the studies are). For example, fixed-effect models assume there is no heterogeneity in effect sizes, so if there is evidence of heterogeneity, random-effects models are normally more appropriate. However, this does not seem to be well understood by investigators conducting meta-analyses, who often run every type of analysis and then choose whichever statistic suits them best. The second main method for manipulation in meta-analyses is therefore the failure to perform the most appropriate analysis. Perhaps this attitude arises from misinterpretation of respectable literature, such as the learning material on the Cochrane Collaboration website, which suggests running the different models to investigate heterogeneity.
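
The sketch below contrasts the two models on the same invented data, using the DerSimonian–Laird estimate of the between-study variance (one common random-effects approach; again, none of the numbers are real):

```python
import numpy as np

# Five invented studies: the small, imprecise ones show large effects
# while the precise ones show small effects, i.e. marked heterogeneity.
effects = np.array([0.05, 0.90, 0.10, 0.85, 0.02])
variances = np.array([0.01, 0.04, 0.01, 0.05, 0.02])

# Fixed-effect model: assumes a single true effect (no heterogeneity).
w = 1 / variances
fixed = np.sum(w * effects) / np.sum(w)

# Cochran's Q measures heterogeneity; DerSimonian-Laird converts the
# excess of Q over its degrees of freedom into a between-study
# variance estimate, tau^2.
q = np.sum(w * (effects - fixed) ** 2)
df = len(effects) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau_sq = max(0.0, (q - df) / c)

# Random-effects model: adds tau^2 to each study's variance, which
# pulls the weights closer together and upweights the small studies.
w_re = 1 / (variances + tau_sq)
random_effects = np.sum(w_re * effects) / np.sum(w_re)

print(f"Q = {q:.1f} on {df} df; tau^2 = {tau_sq:.3f}")
print(f"Fixed-effect estimate:   {fixed:.2f}")           # ~0.19
print(f"Random-effects estimate: {random_effects:.2f}")  # ~0.34
```

An analyst ‘shopping’ between models here would find an estimate nearly twice as large under one of them, which is exactly why the model should be chosen in advance, on statistical grounds, rather than after seeing the results.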

Do not be put off!
Do not let all this put you off conducting or reading meta-analyses; this article is only intended to make people more cautious about their results. People commonly believe that meta-analyses are trustworthy because they incorporate a lot of information, or because they calculate statistics (as opposed to reviews). Whilst meta-analyses do have many advantages, trust should be reserved for those which give you particular reason for it, for example because they were conducted by reputable statisticians or because their methodology appears to have been well planned in advance. If you are thinking of conducting a meta-analysis, make sure you know how to do it correctly!

How can I correctly conduct a meta-analysis?
There are many books dedicated to meta-analysis; a good list is available from meta-analysis.com.

There are some useful online resources, such as the previously mentioned learning material on the Cochrane Collaboration website.

Find yourself a statistician who has experience of meta-analyses. However, beware of statisticians who have merely picked the techniques up from other researchers rather than learning them from qualified professionals!

Better still, enrol yourself on a course. However, courses convenient for your location can be difficult to find, and then you have to get a place on the course…

References
1. Grogan, K. Boehringer leaps to defence of Spiriva over cardiovascular risk claim. September 2008. Accessed 26 November 2008 from the PharmaTimes website.
2. Singh, S., Loke, Y.K. and Furberg, C.D. (2008) Inhaled anticholinergics and risk of major adverse cardiovascular events in patients with chronic obstructive pulmonary disease. Journal of the American Medical Association 300, 1439–1450.
3. Established safety profile of Spiriva confirmed by 30 rigorously controlled clinical trials. September 2008. Accessed 26 November 2008 from the Boehringer Ingelheim website.

Go to the Contacts page to request permission to reproduce an article