We apologise for the delay in responding to your letter; we were only recently notified of it by email. Thank you for taking the time to write in response to our published short report, in which you raise several points that require addressing.
Firstly, we feel it is important to highlight that although this service evaluation focussed specifically on HIV, we acknowledged that the HIV sampling kit was part of a more comprehensive STI kit (syphilis, chlamydia, and gonorrhoea tests). We were upfront about this in our report, and we therefore reject the responders' claim that our paper failed to consider the wider test portfolio required by sexual health screening services.
Of greater concern to us, we note a major error in the responders' calculations of their “RRR” and “HIV result obtained/STI kit requested” values. This is important, as their concluding statement rests on this error. The responders have incorrectly used the number of returned kits (256,717) instead of the number of requested kits (319,485) when calculating the RRR (request-to-return ratio) and the “HIV result obtained/STI kit requested” proportion. Applying the correct calculation to the responders' data, the RRR is not 1.36 (256,717/188,187) but 1.70 (319,485/188,187), and the “HIV result obtained/STI kit requested” proportion is 58.9% (188,187/319,485), not the 73.3% they presented. We also note that, using the responders' own data, the successful processing proportion of their MT samples was lower than that of our DBS samples (84.6% vs 98.8% for the DBS). Their data highlight that MT processing issues arise once a sample has been successfully returned by the user, and we would be interested to know the reasons for these processing failures, which account for 15.4% of their returned samples (compared with 0.2% of returned DBS samples in our paper). For these reasons it is clear that the responders' claims do not undermine the findings of our study.
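For transparency, the corrected figures above can be reproduced with a short script; the input values are those quoted from the responders' letter, and the variable names are ours, used purely for illustration:

```python
# Figures quoted from the responders' letter (discussed above).
kits_requested = 319_485
kits_returned = 256_717
hiv_results_obtained = 188_187

# The RRR (request-to-return ratio) must use the number of kits REQUESTED,
# not the number returned.
rrr_as_presented = kits_returned / hiv_results_obtained    # incorrect denominator logic
rrr_corrected = kits_requested / hiv_results_obtained

print(f"RRR as presented: {rrr_as_presented:.2f}")   # 1.36
print(f"RRR corrected:    {rrr_corrected:.2f}")      # 1.70

# "HIV result obtained / STI kit requested" proportion.
prop_as_presented = hiv_results_obtained / kits_returned   # wrongly uses returned kits
prop_corrected = hiv_results_obtained / kits_requested

print(f"Proportion as presented: {prop_as_presented:.1%}")  # 73.3%
print(f"Proportion corrected:    {prop_corrected:.1%}")     # 58.9%
```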
We would like to address the responders' comments about the “low return rates for postal testing”, which they regard as a shortfall of this research. As detailed in the report, data collection spanned a short 4-month period, during which return rates were 68.7% for MT and 66.5% for DBS. We do not accept that these rates are “low”. They are average when compared with other peer-reviewed published data on STI kit return rates, which range from 32.8% to 84% (Osmond 2000, Elliot 2015, Cordwell 2015, Manavi 2017, Turner 2019). The impressive return rates described by the responders are provided without much context (e.g. do they reflect a complete historic data-set of all kit requests across the nation, or a snapshot over a short period from one region, which may have excluded kits requested within the period but returned outside it?). Without this context it is impossible to make inferences about their return rates. Had they provided the same data but stated it was from the same region as our study, these comments would have been more credible.
Some of the responders' critiques were already addressed in the short report, and we fail to understand why these points have been raised again and presented as new information. We clearly acknowledged our small sample size (550) in our limitations, and encouraged further comparative evaluations, with larger numbers and across different regions, to test the robustness of our report. We also addressed the low processing rates of MT in our report by stating that a large proportion of samples returned by kit users were of insufficient volume (accounting for 22.5% of all MT samples). Our report also addressed the disadvantages of DBS, citing a more complex extraction process, higher cost, and fewer laboratories accredited to analyse these samples. The nature of the comparative review meant that assessing the “real-world true HIV negative results” was difficult in the context of service provision. Our clinical experience makes us certain that, outside the remit of prospective research, service providers would not ask their users to perform two different HIV tests (besides point-of-care testing) to confirm HIV-negative results. Pragmatically, it is usually easier to verify reactive results, as most sexual health clinics will mandate repeat HIV testing with gold-standard venous blood sampling. Once again, we acknowledged this in our report.
Several criticisms the responders raised about our laboratory methodology appear to be misplaced, and perhaps should have been directed towards the potential for operator-based (client-dependent) difficulties. We agree that MT samples used in a clinical setting and stored in controlled laboratory conditions may remain stable in excess of 4 days (the responders quoted 8 days for their laboratory). We know that, in reality, these laboratory conditions cannot be replicated by the general public (e.g. users may leave samples on surfaces exposed to direct sunlight, or in their bag for extended periods before posting). Furthermore, as part of the UKAS-accredited laboratory validation process, the laboratory used for this report identified that samples older than 4 days (stored in realistic conditions) were associated with a higher false-positive HIV rate than those under 4 days old.
We would also like to clarify some of the questions asked by the responders. Regarding their comment on the lack of quantification of what was considered a haemolysed sample, we can confirm that this was a two-stage process: obviously haemolysed blood was rejected manually, while borderline samples were still put through the analyser, and those rejected by the instrument software were deemed to have been rejected due to haemolysis.
Regarding the responders' commentary about the quoted sensitivities and specificities of the assays: these standardised references relate explicitly to venous blood samples, not capillary blood samples. The quoted sensitivities and specificities are not necessarily transferable between the two, and laboratory-based validation is required to ensure test accuracy. The laboratory used in our report had previously conducted sensitivity and specificity studies in which it validated the DBS assay against known HIV-positive whole blood samples, finding that up to an 8-fold reduction in concentration was acceptable for making a diagnosis. Furthermore, the process was accredited for routine use following a successful UKAS inspection. We would add that we do state in our supplementary data that both the DBS filter paper and the MT containers were CE marked, as the responders seemed to imply otherwise.
While the responders rightly state that only a small sample (200-400 microlitres of serum after centrifugation) is required for processing, they have omitted some key points. Firstly, the sample the user provides is not serum but whole blood, so more than 400 microlitres of blood will be required from the user; insufficient samples in our study referred to whole blood volumes. To obtain 200-400 microlitres of serum, approximately 1 ml of whole blood is needed as a starting volume, which many of the MT users in our report did not achieve. We also dispute that all 5 tests mentioned (HIV, syphilis Ab, hepatitis B, hepatitis C, and quantitative syphilis RPR and TPHA) can be carried out with 200-400 microlitres of serum, a claim to which the responders made only ambiguous reference.
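As a rough illustration of the whole blood versus serum point: assuming a serum yield of roughly 40-50% of the whole blood volume after centrifugation (an indicative range based on typical haematocrit, and an assumption for illustration rather than a figure from our report), the starting blood volume needed to obtain the upper end of the serum requirement can be estimated:

```python
# Hypothetical illustration: estimate the whole blood volume a user must
# provide to yield a target serum volume after centrifugation.
# The 40-50% serum yield range is an assumption for illustration only.

def blood_needed_ul(serum_ul: float, serum_yield: float) -> float:
    """Whole blood volume (microlitres) required for a target serum volume."""
    return serum_ul / serum_yield

target_serum_ul = 400  # upper end of the 200-400 microlitre requirement
for yield_fraction in (0.40, 0.50):
    needed = blood_needed_ul(target_serum_ul, yield_fraction)
    print(f"yield {yield_fraction:.0%}: {needed:.0f} microlitres of whole blood")
# Around 800-1000 microlitres, i.e. close to the ~1 ml cited above.
```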
We strongly reject the claim of author bias, given that at the time the report was conducted the service was offering both MT and DBS. We also reject the responders' suggestion of inadequate procedural quality control, a suggestion we have addressed in our report, our supplementary details, and this response letter.
In summary, the data compiled by the responders were inaccurate and misleading with regard to their overall HIV result-to-STI kits requested proportion and RRR values. The data they provided do not compare favourably with the performance of the DBS in our report. Their data suggest that much of their success was driven by higher kit return rates than ours (80.4% vs our paper's 68.7% for MT and 66.5% for DBS). The responders' data also suggest that they too experienced a drop in the successful processing rate for MT (84.6%, compared with 98.8% for our paper's DBS); this drop was likewise observed in our paper when comparing MT and DBS. This, in essence, is the key message with which our short paper concludes, and the responders' results fit this narrative. Because our report was a comparative evaluation (as opposed to a single-arm evaluation), we were able to state from our findings that return rates were not the deciding factor in the RRR values of the MT or DBS modalities, as the return rates were the same. Applying this logic to the responders' data, it may be reasonable to suggest that, had they used DBS, their RRR value would approach 1.00, given the high kit return rates presented in their response letter.
We would encourage the responders to publish their data in full in a peer-reviewed journal, so that readers can draw their own conclusions from their results. This is a rapidly expanding area which has not yet been thoroughly explored through research.