
Launching a new era for behavioural surveillance
  1. Lisa E Manhart1,2,3,
  2. Christine M Khosropour1,2
  1. Department of Epidemiology, University of Washington, Seattle, Washington, USA
  2. The Center for AIDS and STD, University of Washington, Seattle, Washington, USA
  3. Department of Global Health, University of Washington, Seattle, Washington, USA
  Correspondence to Dr Lisa E Manhart, Department of Epidemiology, University of Washington, 325 9th Avenue, Box 359931, Seattle, Washington 98104, USA; lmanhart{at}u.washington.edu

Although systems to monitor the incidence of disease date from the mid-1800s,1 behavioural surveillance is in its relative infancy. The first systematic data on human sexual behaviour, the Kinsey Reports, were published in the late 1940s/early 1950s, but it was not until 40 years later that national systematic efforts to collect data on sexual behaviours were undertaken. Great Britain's National Surveys of Sexual Attitudes and Lifestyles (NATSAL) and the National Health and Social Life Survey in the USA were both launched in the early 1990s in response to the emerging HIV/AIDS epidemic, and both were accompanied by significant debate about public funding of research on the still taboo subject of sexual behaviour. Today, taboos have loosened and systematic collection of data on sexual behaviour is routine in many settings. Nevertheless, there are few instances of data that can be compared across time and geographical location. In response to this, the European Centre for Disease Prevention and Control (ECDC) launched an effort to harmonise behavioural surveillance by recommending uniform collection of core indicators across populations. In this issue, Jørgensen et al describe results of Denmark's baseline behavioural surveillance survey using the ECDC core indicators in a general population sample of young adults. Approximately 30% did not use condoms at sexual debut, placing them at risk of sexually transmitted infections (STI). Even more were unprotected at their last sexual encounter, and this was amplified in casual partnerships. While the results themselves are not surprising, the establishment of baseline data for the ECDC core indicators in Denmark is an important milestone.

The development of core indicators was undertaken by an expert panel that mapped existing behavioural surveillance strategies in European countries in 2008.2 Topics covered were similar across countries, but a wide variety of indicators were in use and the sustainability of many systems was questionable. To address the former concern, the panel recommended five core indicators that should be consistently collected across populations: (1) number of sexual partners (last 12 months); (2) condom use at last intercourse (last 12 months) for stable, casual and paid partners; (3) experience of HIV testing; (4) having paid for sex (last 12 months) and associated condom use; (5) a composite indicator of HIV knowledge.2 Among youth, the additional collection of age at sexual debut and condom/other contraceptive use at sexual debut was recommended. Although many settings collect some or all of these indicators, results have been inconsistently reported, and the work by Jørgensen and colleagues represents the first explicit publication of the ECDC core indicators. In addition to setting the stage for future comparisons, Jørgensen's report highlights several key considerations for behavioural surveillance.
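Purely to illustrate what a harmonised indicator set can look like in machine-readable form, the sketch below encodes the five core indicators and the youth additions as a small Python structure. The field names and recall-period encoding are assumptions made for this example; they are not an ECDC data specification.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class CoreIndicator:
        name: str
        recall_period_months: Optional[int]  # None = lifetime / not time-limited
        youth_only: bool = False             # recommended only for youth samples

    # Hypothetical machine-readable encoding of the ECDC core indicators.
    ECDC_CORE_INDICATORS = [
        CoreIndicator("number_of_sexual_partners", 12),
        CoreIndicator("condom_use_at_last_intercourse_by_partner_type", 12),
        CoreIndicator("ever_tested_for_hiv", None),
        CoreIndicator("paid_for_sex_and_associated_condom_use", 12),
        CoreIndicator("hiv_knowledge_composite", None),
        # Additional items recommended when surveying youth:
        CoreIndicator("age_at_sexual_debut", None, youth_only=True),
        CoreIndicator("condom_or_contraceptive_use_at_sexual_debut", None, youth_only=True),
    ]

    for indicator in ECDC_CORE_INDICATORS:
        print(indicator.name, indicator.recall_period_months, indicator.youth_only)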

To establish their baseline indicators, Denmark elected to sample the general population of young adults. While most settings must employ complicated and expensive sampling schemes to achieve a representative sample of the general population, typically in the form of stratified multistage cluster designs,3, 4 Denmark is a notable exception. National registration of all residents is required by law and is maintained under the Danish Civil Registration System. Jørgensen and colleagues leveraged this system to identify a random sample of eligible respondents and sent invitations to participate. This random sampling from the entire population is the gold standard for survey research and a distinct strength of their study.
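As a minimal sketch of this design, drawing a simple random sample from a complete population register requires only a single unbiased draw; the register below is an invented stand-in, not the Danish Civil Registration System's actual structure.

    import random

    random.seed(2024)  # reproducible draw for the example

    # Invented population register: one record per eligible young adult.
    register = [{"person_id": i, "age": random.randint(15, 29)} for i in range(500_000)]

    # Simple random sample without replacement; every eligible resident has the
    # same probability of selection, so no design weights are needed at this stage.
    invited = random.sample(register, k=20_000)

    print(f"Invited {len(invited)} of {len(register)} eligible residents")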

Although Jørgensen's sampling scheme was optimal for surveillance of heterosexual youth, it was less well-suited to young men who have sex with men (MSM), the group at highest risk of HIV and STI. In Denmark, as in many settings, MSM constitute only 2%–5% of the population, many members of this population are difficult to identify, and their risk behaviours differ from those of the general population. General population surveillance is typically inefficient for MSM and separate surveillance activities employing a unique set of core indicators are recommended.2 Denmark's Sex Life Surveys use convenience sampling to recruit MSM from gay venues and online,5 similar to the Gay Men's Sex Survey (GMSS) in the UK,6 and have been conducted since 2000. Time-space sampling (TSS) and respondent-driven sampling (RDS) can yield more representative samples of high-risk subgroups, but involve complicated and resource-intensive methods. TSS has been used since 2003 to recruit MSM in the US National HIV Behavioural Surveillance (NHBS)7 and requires ethnographic mapping to identify sampling locations. RDS relies on social network connections to recruit members of hidden populations, but the number of waves required for an unbiased sample is not always achieved. Despite these challenges, the advantages of more representative data are generally worth the costs and, whenever economically feasible, TSS and/or RDS should be considered for hard-to-reach or under-represented populations.
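To make the wave structure of RDS concrete, the sketch below simulates coupon-based recruitment through an invented social network: a handful of seeds each hand a fixed number of coupons to peers, and recruitment proceeds wave by wave until coupons stop being redeemed. The network size, coupon count and redemption probability are illustrative assumptions, not parameters from any actual surveillance system.

    import random

    random.seed(7)

    # Invented social network: each person can name a few peers in the target group.
    population = list(range(20_000))
    peers = {p: random.sample(population, k=8) for p in population}

    COUPONS_PER_RECRUIT = 3   # coupons handed to each participant (assumed)
    REDEMPTION_PROB = 0.35    # chance a named peer redeems a coupon (assumed)
    MAX_WAVES = 8             # long chains are needed for unbiased estimates

    recruited = set(random.sample(population, k=5))  # initial seeds
    current_wave = list(recruited)

    for wave in range(1, MAX_WAVES + 1):
        next_wave = []
        for person in current_wave:
            for peer in random.sample(peers[person], k=COUPONS_PER_RECRUIT):
                if peer not in recruited and random.random() < REDEMPTION_PROB:
                    recruited.add(peer)
                    next_wave.append(peer)
        print(f"Wave {wave}: {len(next_wave)} new recruits, {len(recruited)} total")
        if not next_wave:
            break  # recruitment chains died out before reaching enough waves
        current_wave = next_wave

Whether the chains persist long enough to approach a representative sample, or die out after a few waves, is precisely the uncertainty noted above.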

Obtaining the high response rates needed to minimise non-response bias was a challenge for Jørgensen et al, with only 20% of sampled young adults participating. Low response rates are an ongoing concern for behavioural surveillance, and response rates appear to be declining further. The screening rate in NHBS (the proportion of MSM screened out of those approached) dropped from 43% in 2008 to 30% in 2011.8, 9 Return rates for the booklet component of the GMSS have also declined substantially, from 14.3% in 2007 to 9.3% in 2008.6 Even the most recent NATSAL experienced challenges with non-response, with only 58% of sampled respondents participating.3

When participation rates are low, weighting methods are typically employed to adjust for potential bias. In most settings, individual-level information on non-responders is unavailable, and non-response weights are constructed using population-level averages. Here Jørgensen and colleagues directly matched non-responders to a national market research database and constructed weights based on complete individual-level sociodemographic data. While this likely produced more accurate weights, the weighting process assumes that non-responders of the same age, ethnicity and education as the responders have similar behaviours, which is probably not always the case. No weighting system can completely correct for non-response, and the development of novel methods to boost completion rates should be a priority.
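As a minimal sketch of the kind of adjustment described here, the code below constructs cell-based non-response weights: the sample frame is cross-tabulated by a few sociodemographic variables, and each responder is weighted by the inverse of the response rate in his or her cell. The variables and data are invented for illustration and do not reproduce Jørgensen and colleagues' actual weighting procedure.

    import pandas as pd

    # Invented sample frame with a response indicator. In the Danish study,
    # non-responder characteristics came from a national market research database,
    # so cells like these could be formed from individual-level data on everyone sampled.
    frame = pd.DataFrame({
        "age_group": (["15-19"] * 4 + ["20-24"] * 4 + ["25-29"] * 4) * 250,
        "education": ["basic", "basic", "higher", "higher"] * 750,
        "responded": [1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1] * 250,
    })

    # Response rate within each sociodemographic cell.
    cell_rate = frame.groupby(["age_group", "education"])["responded"].transform("mean")

    # Inverse-probability-of-response weight, defined for responders only.
    is_responder = frame["responded"] == 1
    frame.loc[is_responder, "nr_weight"] = 1.0 / cell_rate[is_responder]

    # Weighted analyses then assume responders stand in for non-responders in the
    # same cell; this is the very assumption questioned above.
    print(frame.loc[is_responder].groupby(["age_group", "education"])["nr_weight"].mean())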

A number of surveillance systems now use web-based data collection to augment or replace existing methods,10 including the most recent NATSAL.11 In the Danish study, the investigators took a unique approach by coupling a traditional survey method (ie, a postal mailing to a random sample of the population) with web-based data collection (ie, each mailing included a survey link). Though this method may have generated more representative estimates than traditional web-based convenience samples, the requirement to enter the survey link manually likely contributed to the low response rate. Fully electronic surveys that are linkable from, and administered on, a mobile phone, tablet, or personal computer, such as those used in the US NHBS,12 may enhance response rates. This technology exists and should serve as a launching pad for additional innovative methods to increase participation in behavioural surveillance.

Despite the low response rate, the establishment of baseline data on the ECDC core indicators in Denmark is noteworthy. Critical next steps include the development of creative approaches to optimise sexual behaviour survey response rates, repeated years of data collection and the explicit implementation and reporting of these core indicators in other geographical settings. Realising the vision of a consistent set of core indicators collected across nations will permit valid cross-country comparisons and allow us to more nimbly respond to the changing landscape of sexual behaviours that place populations at risk of HIV and STI.

Acknowledgments

We thank Hanne Thiede for helpful discussions.

References

Footnotes

  • Funding LEM is funded by the US National Institutes of Health (NIH/NIAID R01 AI110666 and U19 AI113173). CMK is also supported by the US National Institutes of Health (NIH/NIAID T32 AI07140). LEM has served on a scientific advisory board for QIAGEN and has received reagents and test kits for diagnostic assays from Hologic/Gen-Probe.

  • Competing interests None.

  • Provenance and peer review Commissioned; internally peer reviewed.
