Impact of deploying multiple point-of-care tests with a ‘sample first’ approach on a sexual health clinical care pathway. A service evaluation

Objectives: To assess the clinical service value of STI point-of-care test (POCT) use in a ‘sample first’ clinical pathway (patients providing samples on arrival at clinic, before clinician consultation). Specific outcomes were: patient acceptability; whether a rapid nucleic acid amplification test (NAAT) for Chlamydia trachomatis/Neisseria gonorrhoeae (CT/NG) could be used as a POCT in practice; feasibility of non-NAAT POCT implementation for Trichomonas vaginalis (TV) and bacterial vaginosis (BV); and impact on patient diagnosis and treatment.

Methods: Service evaluation in a south London sexual health clinic. Symptomatic female and male patients, and sexual contacts of CT/NG-positive individuals, provided samples for diagnostic testing on clinic arrival, prior to clinical consultation. Tests included routine culture and microscopy; a CT/NG NAAT (GeneXpert); and non-NAAT POCTs for TV and BV.

Results: All 70 patients approached (35 males, 35 females) participated. The ‘sample first’ pathway was acceptable, with >90% reporting they were happy to give samples on arrival and receive results in the same visit. Non-NAAT POCT results were available for all patients before they left the clinic; rapid CT/NG results were available for only 21.4% (15/70; 5 males, 10 females) of patients before they left the clinic. Known negative CT/NG results led to two females avoiding presumptive treatment, and to one male receiving treatment directed at possible Mycoplasma genitalium infection causing non-gonococcal urethritis. Non-NAAT POCTs detected more positives than routine microscopy (TV 3 vs 2; BV 24 vs 7), resulting in more patients receiving treatment.

Conclusions: A ‘sample first’ clinical pathway to enable multiple POCT use was acceptable to patients and feasible in a busy sexual health clinic, but the rapid CT/NG processing time was too long to enable POCT use. Further development to shorten test processing times is needed to enable point-of-care use of rapid NAATs.

• It may not be possible to include information about every numbered guideline item in reports of original formal studies, but authors should at least consider every item in writing their reports.
• Although each major section (i.e., Introduction, Methods, Results, and Discussion) of a published original study generally contains some information about the numbered items within that section, information about items from one section (for example, the Introduction) is often also needed in other sections (for example, the Discussion).

Each entry below gives the text section or numbered item name, followed by the section or item description.

Title and abstract: Did you provide clear and accurate information for finding, indexing, and scanning your paper?

1. Title
a. Indicates the article concerns the improvement of quality (broadly defined to include the safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity of care)
b. States the specific aim of the intervention
c. Specifies the study method used (for example, "A qualitative study," or "A randomized cluster trial")

2. Abstract
Summarizes precisely all key information from various sections of the text using the abstract format of the intended publication

Introduction: Why did you start?

3. Background knowledge
Provides a brief, non-selective summary of current knowledge of the care problem being addressed, and characteristics of organizations in which it occurs

4. Local problem
Describes the nature and severity of the specific local problem or system dysfunction that was addressed

5. Intended improvement
a. Describes the specific aim (changes/improvements in care processes and patient outcomes) of the proposed intervention
b. Specifies who (champions, supporters) and what (events, observations) triggered the decision to make changes, and why now (timing)

6. Study question
States precisely the primary improvement-related question and any secondary questions that the study of the intervention was designed to answer

Methods: What did you do?

7. Ethical issues
Describes ethical aspects of implementing and studying the improvement, such as privacy concerns, protection of participants' physical well-being, and potential author conflicts of interest, and how ethical concerns were addressed

8. Setting
Specifies how elements of the local care environment considered most likely to influence change/improvement in the involved site or sites were identified and characterized

9. Planning the intervention
a. Describes the intervention and its component parts in sufficient detail that others could reproduce it
b. Indicates main factors that contributed to choice of the specific intervention (for example, analysis of causes of dysfunction; matching relevant improvement experience of others with the local situation)
c. Outlines initial plans for how the intervention was to be implemented: e.g., what was to be done (initial steps; functions to be accomplished by those steps; how tests of change would be used to modify intervention), and by whom (intended roles, qualifications, and training of staff)

10. Planning the study of the intervention
a. Outlines plans for assessing how well the intervention was implemented (dose or intensity of exposure)
b. Describes mechanisms by which intervention components were expected to cause changes, and plans for testing whether those mechanisms were effective
c. Identifies the study design (for example, observational, quasi-experimental, experimental) chosen for measuring impact of the intervention on primary and secondary outcomes, if applicable
d. Explains plans for implementing essential aspects of the chosen study design, as described in publication guidelines for specific designs, if applicable (see, for example, www.equator-network.org)
e. Describes aspects of the study design that specifically concerned internal validity (integrity of the data) and external validity (generalizability)

11. Methods of evaluation
a. Describes instruments and procedures (qualitative, quantitative, or mixed) used to assess a) the effectiveness of implementation, b) the contributions of intervention components and context factors to effectiveness of the intervention, and c) primary and secondary outcomes
b. Reports efforts to validate and test reliability of assessment instruments
c. Explains methods used to assure data quality and adequacy (for example, blinding; repeating measurements and data extraction; training in data collection; collection of sufficient baseline measurements)

12. Analysis
a. Provides details of qualitative and quantitative (statistical) methods used to draw inferences from the data
b. Aligns unit of analysis with level at which the intervention was implemented, if applicable
c. Specifies degree of variability expected in implementation, change expected in primary outcome (effect size), and ability of study design (including size) to detect such effects
d. Describes analytic methods used to demonstrate effects of time as a variable (for example, statistical process control)

Results: What did you find?

13. Outcomes
a) Nature of setting and improvement intervention
i. Characterizes relevant elements of setting or settings (for example, geography, physical resources, organizational culture, history of change efforts), and structures and patterns of care (for example, staffing, leadership) that provided context for the intervention
ii. Explains the actual course of the intervention (for example, sequence of steps, events or phases; type and number of participants at key points), preferably using a time-line diagram or flow chart
iii. Documents degree of success in implementing intervention components
iv. Describes how and why the initial plan evolved, and the most important lessons learned from that evolution, particularly the effects of internal feedback from tests of change (reflexiveness)
b) Changes in processes of care and patient outcomes associated with the intervention
i. Presents data on changes observed in the care delivery process
ii. Presents data on changes observed in measures of patient outcome (for example, morbidity, mortality, function, patient/staff satisfaction, service utilization, cost, care disparities)
iii. Considers benefits, harms, unexpected results, problems, failures
iv. Presents evidence regarding the strength of association between observed changes/improvements and intervention components/context factors
v. Includes summary of missing data for intervention and outcomes

Discussion: What do the findings mean?

17. Interpretation
a. Explores possible reasons for differences between observed and expected outcomes
b. Draws inferences consistent with the strength of the data about causal mechanisms and size of observed changes, paying particular attention to components of the intervention and context factors that helped determine the intervention's effectiveness (or lack thereof), and types of settings in which this intervention is most likely to be effective
c. Suggests steps that might be modified to improve future performance
d. Reviews issues of opportunity cost and actual financial cost of the intervention

18. Conclusions
a. Considers overall practical usefulness of the intervention
b. Suggests implications of this report for further studies of improvement interventions

Other information: Were other factors relevant to conduct and interpretation of the study?

19. Funding
Describes funding sources, if any, and role of funding organization in design, implementation, interpretation, and publication of study