
Overview of study design in clinical epidemiology
J M Stephenson, A Babiker
Department of Sexually Transmitted Diseases, UCL Medical School, The Mortimer Market Centre, off Capper Street, London WC1E 6AU
Correspondence to: Dr Stephenson, jstephenson@gum.ucl.ac.uk


Introduction

The purpose of this article is to provide a brief overview of the range of study designs used to address research questions in clinical epidemiology. For readers with a particular research question in mind, comparison of the different options may guide selection of an appropriate study design. Clinical epidemiology can be defined as the investigation and control of the distribution and determinants of disease. Some of the other epidemiological terms used in the article are described more fully in table 1. Later articles in this series will deal with different study designs in more detail.

Table 1

Definition of terms

The range of clinical and epidemiological studies in sexual health is very wide, but in terms of purpose and basic study design they can be divided into a few broad categories. The purpose may be to determine the level (prevalence) of disease in a population, to identify causes of disease or those at high risk of disease, to describe the natural history of disease, or to prevent the onset of disease or alter its course in individuals or populations. The simplest categorisation in terms of study design is between observational studies and intervention studies (trials). Observational studies, in which one observes the course of a disease or the relation between risk factors (exposures) and outcomes, are used to address questions about prevalence, natural history, aetiology, and risk groups. Trials, in which one intervenes to prevent or change the course of a disease, are used to evaluate preventive or therapeutic interventions, but can also provide strong evidence of causality (table 2).

Table 2

Ideal study designs according to purpose of study

Observational studies

Observational studies include cross sectional, cohort, case-control, and ecological studies.

CROSS SECTIONAL STUDIES

In a cross sectional study, individuals with a defined disease, risk factor, or other condition of interest are identified at a point in time. The number of individuals with the condition divided by the total number in the population gives the prevalence (expressed as a proportion) of the condition in a defined population at that point in time. For example, many cross sectional studies have been conducted to estimate the prevalence of HIV infection in antenatal and genitourinary medicine clinics.1
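To make the arithmetic concrete, the sketch below (in Python) computes a prevalence estimate with an approximate 95% confidence interval; the clinic counts are invented for illustration and are not taken from any of the studies cited.

```python
import math

def prevalence(cases: int, population: int):
    """Point prevalence with a normal-approximation 95% confidence interval."""
    p = cases / population
    se = math.sqrt(p * (1 - p) / population)  # standard error of a proportion
    return p, p - 1.96 * se, p + 1.96 * se

# Hypothetical cross sectional survey: 38 seropositive among 1900 clinic attenders
p, lower, upper = prevalence(38, 1900)
print(f"Prevalence {p:.1%} (95% CI {lower:.1%} to {upper:.1%})")
```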

COHORT STUDIES

In cohort studies individuals are followed through time to monitor the natural history of a disease, to observe prognosis in relation to treatment, or to investigate aetiology. In the early days of the AIDS epidemic, several cohort studies provided vital information about the course of HIV infection—for example, the Multicenter AIDS Cohort Study.2 More recently, in the era of anti-HIV therapy, cohorts with continuous patient recruitment have made an important contribution to knowledge about the incubation period of infection and the impact of changes in therapy over time.3, 4

An example of a cohort study conducted to investigate aetiology is provided by hepatitis B infection and liver cancer. To study the hypothesis that hepatitis B causes hepatocellular carcinoma, over 22 000 male Taiwanese civil servants, of whom 15% were hepatitis B surface antigen (HBsAg) positive, were followed for approximately 9 years.5 At follow up, the death rate from hepatocellular carcinoma was 98 times higher in HBsAg positive men (the exposed group) than in HBsAg negative men (the unexposed group), indicating an exceptionally strong association between HBsAg status and primary hepatocellular carcinoma. This example also illustrates how large cohort studies need to be if the outcome of interest is relatively rare. An alternative approach would be to do a case-control study.
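The cohort comparison itself reduces to a ratio of event rates in the exposed and unexposed groups. The following sketch shows that calculation with invented counts and person-years, not the published figures from the Taiwanese study, chosen only to mirror the structure of the example above.

```python
def rate_ratio(events_exposed, pyears_exposed, events_unexposed, pyears_unexposed):
    """Incidence rate ratio: the exposed group's event rate relative to the unexposed."""
    return (events_exposed / pyears_exposed) / (events_unexposed / pyears_unexposed)

# Illustrative counts only: deaths from hepatocellular carcinoma and
# person-years of follow up in each exposure group.
rr = rate_ratio(40, 30_000, 2, 150_000)
print(f"Death rate in HBsAg positive men is {rr:.0f} times that in HBsAg negative men")
```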

CASE-CONTROL STUDIES

The distinctive feature of a case-control study is that individuals are selected according to disease or outcome status rather than exposure status. People with the disease or outcome of interest are selected as cases, and a suitable group of individuals without the disease are selected as controls. Returning to the example of liver cancer and hepatitis B, a case-control study could be conducted by recruiting cases with liver cancer and suitable controls who were free of liver cancer. The relative frequency (odds) of previous exposure to hepatitis B would then be compared between cases and controls. Another key feature of case-control studies is that inferences about the association between exposure and disease depend entirely on the exposure preceding the disease. For example, it would be impossible to conclude that hepatitis B causes liver cancer if the infection occurred after the cancer developed.
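As a minimal sketch of the case-control arithmetic, the snippet below computes the odds ratio from a 2×2 table; the cell counts are hypothetical and chosen only to illustrate the cross-product calculation.

```python
def odds_ratio(exp_cases, unexp_cases, exp_controls, unexp_controls):
    """Cross-product odds ratio from a 2x2 exposure-by-outcome table."""
    return (exp_cases * unexp_controls) / (unexp_cases * exp_controls)

# Hypothetical table: exposure is prior hepatitis B infection; cases have
# liver cancer, controls are free of liver cancer.
estimate = odds_ratio(exp_cases=70, unexp_cases=30,
                      exp_controls=15, unexp_controls=85)
print(f"Odds ratio = {estimate:.1f}")  # odds of exposure in cases relative to controls
```

When the disease is rare, the odds ratio approximates the relative risk that a cohort study would estimate directly.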

The case-control study has intuitive appeal as a means of investigating aetiology. It can be thought of as the logical extension of a case series. For example, it was a cluster of case reports of Pneumocystis carinii pneumonia and Kaposi's sarcoma in young, previously healthy, homosexual men that eventually prompted a national case-control study to seek explanations for these unusual presentations of immunodeficiency.6 The addition of a control group allows the frequency of exposure in cases to be expressed relative to people who are disease free. While it is perfectly logical to search for meaningful differences between comparable groups of people with and without the disease, the intuitive appeal of case-control studies belies a key problem—namely, how to select the most appropriate control group. This is the “Achilles heel” of the case-control study. Since the exposure has already happened, selecting controls who are more (or less) likely than the cases to have been exposed for reasons unrelated to the outcome of interest will result in a biased association (odds ratio) between exposure and disease. Unfortunately, there is no simple recipe for selecting the ideal control group. The potential biases specific to each research question need to be considered carefully before controls are defined and selected. Further discussion about selection of controls is beyond the scope of this article.

ECOLOGICAL STUDIES

Observational studies conducted at a population level rather than an individual level are called ecological studies. Differences in outcome between populations, or over time, are related to population characteristics that are thought to be risk (or preventive) factors. An example would be analysis of the decrease in chlamydial infection and ectopic pregnancy over time in Sweden.7 Although the findings may be causally linked, it is usually hard to explore alternative explanations within the limits of this study design. For this reason, results from ecological studies often serve as a basis for further investigation of individuals.

Intervention studies (clinical trials)

The accepted gold standard for the evaluation of a therapeutic or preventive intervention is the randomised controlled trial (RCT). The RCT has a distinct advantage over observational studies in terms of its potential to avoid selection bias. The key principle is randomisation: in the case of evaluating a single intervention against standard of care, patients are allocated either to the intervention under study (the experimental group) or to standard management (the control group) by a pure chance process. The two groups are followed prospectively for a specified period of time and then compared in terms of an outcome measure specified at the outset. Bias and random error are two different obstacles to overcome in the reliable estimation of the treatment effect, that is, the difference between the experimental and control groups in the study outcome. Bias in this context means any distortion of the study results in a particular direction as a consequence of a systematic difference between the two groups arising from inappropriate design or conduct of the study. Random error is the play of chance leading to an inaccurate estimate of the treatment effect.

The most important design technique for avoiding bias is randomisation. Randomisation ensures that, within the limits of chance variation, there are no systematic differences between the two groups in known and unknown prognostic factors, so that any difference in outcome can reasonably be attributed to the effect of the intervention. In addition, randomisation provides a sound basis for the statistical analysis of the data. Functionally, the process of simple randomisation is analogous to tossing a coin for each patient and allocating the patient to the intervention group if the result is heads and to the control group if tails. In practice, this is done with computer generated lists that mimic repeated coin tossing. Other techniques for avoiding bias include blinding, in which the clinician and/or the patient are unaware of the treatment allocation, and appropriate handling in the analysis of non-adherence to allocated treatment and of missing outcome measures.
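The coin-tossing analogy translates directly into code. The sketch below generates a simple (unrestricted) randomisation list of the kind described; real trials usually add refinements such as blocking or stratification, which are not shown here.

```python
import random

def randomisation_list(n_patients: int, seed: int = 1):
    """Simple randomisation: an independent 'coin toss' for each patient."""
    rng = random.Random(seed)  # a fixed seed makes the list reproducible and auditable
    return [rng.choice(["experimental", "control"]) for _ in range(n_patients)]

print(randomisation_list(10))  # e.g. ['control', 'experimental', ...]
```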

The uncertainty associated with a particular trial result is what we call "random error." Chance can play a much greater part in determining trial results than many people realise. For example, in an RCT of a new antibiotic for the treatment of chlamydia, the observed cure rate in the patients allocated to the new antibiotic was 98%, but the cure rate was only 90% in patients allocated to standard treatment. How confident should we be in concluding that the higher cure rate of the new antibiotic is real, and not just the result of the characteristics of the patients who happened to be allocated to receive it?

Provided that the study is properly randomised and conducted, we can conclude that the observed superiority of the new antibiotic is either real or the result of random error, that is, chance. Randomisation also allows us to quantify this random error and thereby provides a way of reducing our uncertainty about the results. The way to reduce random error is to recruit a sufficient number of patients into the trial. The larger the number of patients (the sample size), the less the uncertainty and the more confidence we have in the trial result. Intuitively, we would have more confidence that the new antibiotic was genuinely superior if the trial had 500 patients than if it had only 50 patients. Later articles in this series will deal with how to work out how large a study should be, and with other design issues.
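To make the 50 versus 500 comparison concrete, the sketch below applies a normal-approximation 95% confidence interval to the difference in cure rates (98% v 90%) at the two trial sizes. It is illustrative arithmetic only, not a substitute for a formal sample size calculation.

```python
import math

def diff_ci(p1, n1, p2, n2, z=1.96):
    """Normal-approximation 95% CI for the difference between two proportions."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) - z * se, (p1 - p2) + z * se

for n_per_arm in (25, 250):  # trials of 50 and 500 patients in total
    lower, upper = diff_ci(0.98, n_per_arm, 0.90, n_per_arm)
    print(f"{2 * n_per_arm} patients: 8% difference, 95% CI {lower:+.1%} to {upper:+.1%}")
```

With 50 patients the interval spans zero (the result is compatible with no true difference), whereas with 500 patients it does not.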

COMMUNITY RANDOMISED TRIALS

Sometimes it is not possible or not desirable to randomise individuals to an intervention because the natural unit of randomisation is an entire group or community, not the individuals within it.8 Examples include a trial to evaluate the effects of sex education delivered to classes of school pupils9 or a trial of improved STI case management delivered to whole communities.10 Both the design and the analysis of the trial have to take account of the fact that the unit of randomisation is the community. Further information about community (or cluster) randomised trials can be found elsewhere (see further reading).
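One practical consequence of randomising communities is that correlated outcomes within a cluster reduce the effective sample size. A common way to quantify this is the design effect, 1 + (m − 1) × ICC, where m is the average cluster size and ICC is the intracluster correlation coefficient; the values below are assumed purely for illustration.

```python
def design_effect(mean_cluster_size: float, icc: float) -> float:
    """Factor by which a cluster randomised trial must inflate its sample size."""
    return 1 + (mean_cluster_size - 1) * icc

# Assumed values for illustration: communities of 500 people, ICC of 0.01
deff = design_effect(500, 0.01)
individual_n = 800  # hypothetical sample size for an individually randomised trial
print(f"Design effect {deff:.2f}: need about {individual_n * deff:.0f} participants")
```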

Hierarchy of studies in determining causality

Many studies are conducted to examine associations between exposures, or putative risk factors, and disease outcomes. Associations can arise through chance, bias, or confounding, or they may indicate a causal relation. Distinguishing between these explanations is a key objective of much research. The study designs described here differ in their ability to indicate causality (fig 1).

Figure 1

Pyramid showing hierarchy of study designs in determining causality.

The randomised trial bears closer resemblance than other designs to a controlled laboratory experiment, in which differences between experimental and control subjects are best explained by the one factor that differs between them, namely the intervention. By comparison, observational studies are less able to eliminate the alternative explanations of bias or confounding. Cohort studies usually provide stronger evidence of causality than case-control studies, partly because one can be sure that the exposure occurred before the disease. Since the direction of analysis in case-control studies is "backwards" (from outcome to exposure), one cannot always be sure that the exposure predated the disease. The same applies to associations found in cross sectional studies. Because of the problems of selecting a suitable control group, case-control studies are also more susceptible than cohort studies to other forms of bias. Although ecological studies can contribute information about associations between groups of individuals, evidence for causality is always weak because individual data on other (confounding) factors are unavailable.

For example, several observational studies have shown an association between HIV infection and other STIs. The strength and statistical significance of this association, found consistently across several studies, make chance an unlikely explanation. However, it was initially unclear whether the association was due to a biological (causal) interaction between HIV and other STIs, or whether it merely reflected a common (confounding) association with high risk sexual behaviour. A subsequent community randomised trial10 showed that the association was causal: improved management of STIs resulted in lower HIV incidence without any appreciable change in sexual behaviour between intervention and control communities.

In conclusion, this article has presented a brief overview of the main study designs in clinical epidemiology, with particular reference to sexual health research. Evidence from different studies addressing the same research question can be synthesised in a systematic way (systematic overview or meta-analysis) to provide stronger evidence than can be gained from individual studies. A future article will describe these methods in more detail.

Further reading

Lilienfeld DE, Stolley PD. Foundations of epidemiology. 3rd ed. New York: Oxford University Press, 1994.

Schlesselman JJ. Case-control studies: design, conduct, analysis. New York, Oxford: Oxford University Press, 1982.

Pocock S. Clinical trials: a practical approach. Chichester: Wiley, 1983.

Murray DM. Design and analysis of group-randomised trials. New York: Oxford University Press, 1998.
