
General Practice

Can students learn clinical method in general practice? A randomised crossover trial based on objective structured clinical examinations

BMJ 1997; 315 doi: https://doi.org/10.1136/bmj.315.7113.920 (Published 11 October 1997) Cite this as: BMJ 1997;315:920
  1. Elizabeth Murray, senior lecturer in primary health carea,
  2. Brian Jolly, directorb,
  3. Michael Modell, professor of primary health carea
  1. a Department of Primary Care and Population Sciences, University College London Medical School and Royal Free Hospital School of Medicine, Whittington Hospital, London N19 5NF
  2. b Medical Education Unit, University of Leeds, Leeds LS2 9NL
  1. Correspondence to: Dr Murray
  • Accepted 1 August 1997

Abstract

Objective: To determine whether students acquired clinical skills as well in general practice as in hospital and whether there was any difference in the acquisition of specific skills in the two environments.

Design: Randomised crossover trial.

Subjects and setting: Annual intake of first year clinical students at one medical school.

Intervention: A 10 week block of general internal medicine, one half taught in general practice, the other in hospital. Students started at random in one location and crossed over after five weeks.

Outcome measures: Students' performance in two equivalent nine station objective structured clinical examinations administered at the mid and end points of the block; a direct comparison of the two groups' performance at five weeks, and analysis of covariance, with first examination scores as a covariate, to determine students' relative improvement over the second five weeks of their attachment.

Results: 225 students rotated through the block; all took at least one examination and 208 (92%) took both. For the first half of the year there was no significant difference in the students' acquisition of clinical skills in the two environments; later, however, students taught in general practice improved slightly more than those taught in hospital (P=0.007).

Conclusions: Students can learn clinical skills as well in general practice as in hospital; more work is needed to clarify where specific skills, knowledge, and attitudes are best learnt to allow rational planning of the undergraduate curriculum.

Introduction

There is an international move towards community based undergraduate medical education.1 2 In Britain half of medical schools have some primary care input into teaching clinical skills,3 and most new curricula will substantially increase community based teaching. This change reflects the “primary care led NHS,”4 which is having a profound impact on the delivery of health care5 and on undergraduate education. However, community based teaching is no cheaper than hospital based teaching,6 has specific disadvantages, including the geographical dispersal of tutors, which makes quality control difficult,7 and imposes considerable travel costs on students.8

Most evaluations of community based teaching have concentrated on ascertaining student or faculty perceptions of its educational value.9 10 11 Using an objective structured clinical examination, Satran et al found equal acquisition of clinical skills,12 but in their study both community and hospital based students were taught by paediatricians and only two of the examination stations were patient based. There is little evidence on whether students taught in general practice can acquire their clinical skills as well as those taught in hospital. This study was designed to address this question.

Methods

The subjects were all first year clinical students at University College London Medical School during the academic year 1995-6. The school has a traditional medical curriculum with two years of basic science followed by three years of clinical medicine. After a four week introduction to clinical skills at the beginning of the first clinical year, students were randomly divided into four groups. Each group started with a different 10 week block (covering general medicine; surgery; medical specialties; and geriatrics, rheumatology, and orthopaedics) and rotated through all four blocks during the year. The block under study here (general medicine based at the Whittington Hospital) consisted of two five week attachments, one in general practice and one in hospital. Students were randomly allocated to start either in hospital or in the community and changed over after five weeks.

The intervention: hospital and general practice based teaching

The five week general practice attachment at the Whittington, the “medicine in the community” clerkship, was designed to replace a traditional hospital clerkship and is described elsewhere.8 Both hospital and general practice attachments shared common aims but the teaching structure was different (see box). For example, as in most medical schools, the contracts of senior academic and NHS staff are unspecific about teaching commitments, whereas the general practitioners were paid specifically to provide protected teaching time.

Aims of the attachment

Both hospital and general practice attachments aim to enable students to acquire:

  • the basic clinical skills of history taking, physical examination, and communication

  • sufficient knowledge to understand and apply these clinical skills

  • appropriate professional attitudes

Structure of teaching in the two sites


Trial design and outcome measures

As well as determining whether students acquired their clinical skills as well in the medicine in the community attachment as in the hospital attachment we also wanted to see whether there was any difference in the acquisition of specific skills in the two environments.

The outcome measure was students' performance in two parallel nine station objective structured clinical examinations (P and Q, see box) given to all students at five and 10 weeks. In the first and third blocks students had P followed by Q, and in the second and fourth blocks they had Q followed by P. Two different examinations were used to promote test security, to minimise learning effects within the design, and to maximise the number of cases used, thus optimising the potential for detecting differential skill acquisition, overall, in the two environments. The disadvantage of using two examinations was that it was impossible to equate P and Q until after the first two blocks.

Contents of examinations P and Q


Stations lasted seven minutes and, except for two on data interpretation, each used a trained standardised patient. Stations were chosen to reflect problems found in both hospital medicine and general practice; all but two had been developed and validated over the previous five years. These two, on acute medical problems, were written by a consultant physician in consultation with a general practitioner. Examiners were drawn from senior hospital physicians and general practice tutors. Most were engaged in first year teaching; all others were experienced examiners in objective structured clinical examinations. Checklists, unavailable to students, were used for marking.

The performance of the students in hospital or community blocks could be directly compared at the five week point for each of the four blocks throughout the year; subsequently their improvement over the second five weeks could also be determined by subtracting the five week score from that at 10 weeks. This was possible because the balanced design cancelled out any differences due to differential difficulty of the examinations.
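A toy calculation, with invented numbers rather than the study's data, shows how this balancing works: any systematic difference in exam difficulty biases the two crossover orders in opposite directions, so averaging over them removes it.

```python
# Invented numbers for illustration only: suppose the true five week skill
# gain is 5 points and exam Q is 3 points harder than exam P.
gain, q_penalty = 5.0, 3.0

# Blocks sitting P at five weeks and Q at 10 weeks understate the gain:
improvement_p_then_q = gain - q_penalty   # 2.0
# Blocks sitting Q first and P second overstate it by the same amount:
improvement_q_then_p = gain + q_penalty   # 8.0

# Averaging across the balanced design recovers the true gain exactly.
mean_improvement = (improvement_p_then_q + improvement_q_then_p) / 2  # 5.0
```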

Analysis of data from the first two blocks suggested that examination Q was slightly harder than P and that this was due to a hard station in Q and an easy one in P. In particular, all students scored so highly on the ascites station that there was no room for improvement. Therefore the same two examinations were used for the final two blocks, but with both photograph stations removed and the two blood test stations used each time.

Results were analysed using SPSS 6.1 for Windows. Firstly, data from each of the four blocks were analysed to compare the effect of the two locations at the five week point. Total mean scores and mean scores for each skill domain were compared using the t test for unrelated groups. Subsequently improvements between 10 week and five week scores were analysed using analysis of covariance, a statistically equivalent variant of a design used by Ali et al13 and Nyquist et al14 to compare two groups in a balanced crossover design. In this analysis blocks 1 and 2 and blocks 3 and 4 were combined to use fully the balanced nature of the design.
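The analyses above were run in SPSS; purely as an illustration of the two steps, the same logic can be sketched in Python on simulated data. The group size, score distributions, and built-in effect below are invented and do not reflect the study's figures.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50  # invented group size, not the study's

# Simulated five week OSCE scores (percentages) for the two starting groups.
gp_first = rng.normal(60, 8, n)     # started in general practice
hosp_first = rng.normal(60, 8, n)   # started in hospital

# Step 1: direct comparison at five weeks with a t test for unrelated groups.
t_stat, p_value = stats.ttest_ind(gp_first, hosp_first)

# Simulated 10 week scores after crossover, with a larger gain built in for
# the group spending its second five weeks in the community.
in_community_second = hosp_first + rng.normal(6, 5, n)
in_hospital_second = gp_first + rng.normal(2, 5, n)

# Step 2: analysis of covariance via least squares - regress the 10 week
# score on the five week score (covariate) plus a location indicator.
y = np.concatenate([in_community_second, in_hospital_second])
covariate = np.concatenate([hosp_first, gp_first])
location = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = community second
X = np.column_stack([np.ones_like(y), covariate, location])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[2] estimates the location effect adjusted for five week performance.
```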

Results

A total of 225 students rotated through the medicine in the community firm in the study year. All took at least one objective structured clinical examination, and 208 (92%) took both. Table 1 shows the results of a direct comparison of mean scores for the hospital and community groups at five weeks for blocks 1 and 2. Table 2 displays the equivalent data for blocks 3 and 4. Table 3 shows the incremental improvement in the two groups over the second five weeks of the attachment for each of the four blocks. In table 1 total examination scores were higher for hospital students on examination P1 and for community students on examination Q2, but only the latter reached statistical significance. These differences were due partly to performance on the interpretation domain and to isolated differences on individual stations, but not to other skill domains. There were no consistent differences between locations. There were no such differences for later groups, apart from one on data interpretation.

Table 1

Comparison of mean scores on objective structured clinical examinations (OSCE) of students taught for their first five weeks of blocks 1 and 2 either in general practice or in hospital

Table 2

Comparison of mean scores of students on objective structured clinical examinations (OSCE) taught for their first five weeks of blocks 3 and 4 in either general practice or in hospital

Table 3

Incremental improvement in total scores on the objective structured clinical examination (OSCE) for the two groups over the second five weeks of their attachments


Further analysis combined the first two and the last two blocks in separate analyses. This showed that students' improvement over the second five weeks, using the first examination as a covariate, was not significantly different between learning locations for the first two blocks (P=0.128) but was significantly better in the community than in hospital for the last two blocks: mean improvement in score 6.6 (95% confidence interval 3.4 to 9.8) for the community attachment and 1.7 (−1.3 to 4.7) for students studying in hospital (P=0.007). Further analysis of scores broken down into skill domains showed that this was due to improved examination skills in the community students (mean improvement: community 4.23 (2.38 to 6.08) v hospital 0.26 (−1.5 to 1.94); P<0.002). These data represent a difference between locations of about 3%-10% on five weeks' experience as measured by performance in the examination.

In addition, mean scores for all students for each examination showed gradual improvement over the 40 weeks. After equation of the examinations, scores showed a significant monotonic linear trend (P<0.001). The increase over the last 20 weeks was about 10%. In other words, five weeks' training in the community attachment, for experienced first year clinical students, was worth the equivalent of about 10 to 20 weeks of average clinical experience in terms of performance in the examination.

Discussion

The results suggest that, overall, students acquire their clinical skills at least as well in general practice as in hospital, and possibly better. This appears to be true for all the skill domains tested. In particular, examination skills improved more for experienced students in community locations. Generally students' clinical skills continued to improve throughout the year. The spread of results around the mean diminished throughout the year, so it is possible that there is a ceiling effect: given the sort of experience provided in the first clinical year, students can only progress so far.

The method was specifically chosen to sample as wide a range of skills as possible within a balanced design. Its strengths include the randomisation of students both to the order in which they took the four blocks making up the first year and, once on the medicine block, to starting in either community or hospital medicine. The crossover design allowed both for direct comparisons of students taught in the two venues and for pre- and post-exposure testing, a design which has been widely favoured as a method of assessing efficacy of skills teaching.13 14 15 16 17 The comparison group is both plausible and fair: the medicine in the community firm was expressly designed to replace a traditional hospital medical clerkship and its brief was to teach the same core clinical skills as those students learn in hospital. The outcome measure, an objective structured clinical examination, is a recognised and widely used method for testing clinical skills18 in undergraduate and postgraduate settings.19 20 The sample size in this study is large enough to detect educationally important differences. Although longer objective structured clinical examinations are needed before inferences about a single student can be made, we were interested in group performance, which is adequately determined by this number of stations.

Potential weaknesses in the design include the small number of stations in each skill domain, which limits the power of the study to determine comprehensively whether specific skills are acquired better in either location. The crossover design minimised the impact of varying ability in different groups, but it was not possible to control for this altogether. Previous work has shown that test security is not necessarily a problem with objective structured clinical examinations,21 22 23 and that under appropriate circumstances, replicated in this study, repetition of stations throughout the year does not jeopardise the validity of the examination.24 25 Stations were chosen to reflect problems that the student might encounter in either hospital or general practice, but it is possible that the problems tested favoured students who had learnt in one or other setting. The apparent validity of the examination is enhanced by the steady improvement of student performance throughout the year. Examiners were partially “blinded” as no information was given on students' prior experience. Any potential bias from examiners preferentially marking students known to them was minimised through drawing examiners from both settings and using structured marking sheets and would have been limited to scores from one station.

These data support current efforts to redistribute resources from traditional locations for student learning to the community. It is necessary to determine which specific knowledge, skills, and attitudes are best acquired in the community, which are best acquired in hospital, and which can be equally well acquired in either environment given appropriate, well structured, and adequately resourced teaching. Further work is also needed on which teaching methods optimise student learning in all settings. Only then can we progress to rational planning of new curricula and provide students with well structured teaching and an optimum balance between hospital based and community based learning.

Key messages

  • Students can learn clinical method as well in general practice as in hospital

  • This supports efforts to redistribute resources from traditional learning locations to general practice

  • More work is needed to determine which specific skills, knowledge, and attitudes are best acquired in each location

Acknowledgments

We thank all the general practice tutors and consultant physicians at the Whittington Hospital for their hard work as examiners and Terri Charrier for organising the examinations.

Funding: Department of Health, through the Ce-MENT (community based medical education in North Thames) project.

Conflict of interest: None.

References
