Mapping domains of general practice care
The broad quality framework informing the development of the survey defines quality of service provision in terms of 'access' and 'effectiveness', with the latter subdivided into 'interpersonal' and 'technical' effectiveness. We mapped the aspects of general practice care which have been identified as important to patients from a number of published reviews [2, 7, 8]. We then reviewed a number of discrete choice experiments in which patients had been asked to rank the importance of different aspects of general practice care [9–12]. We also included the requirements for the survey outlined in the Department of Health tender, which contained issues that the Department believed to be of importance to patients, as well as specific issues linked to payments in the general practitioner contract. As expected, there was very substantial overlap between these various sources of information on what patients value from their general practice care.
For out of hours care, we identified aspects of care that would reflect the Department of Health tender requirement of understanding, use and overall experience of out of hours services. Aspects of care in these areas were drawn largely from our previous work on out of hours care [13, 14].
We specifically excluded technical aspects of care from consideration. Previous evidence suggests that patients conflate technical and interpersonal aspects of care when making judgements about technical care, and this is supported by more recent unpublished research in Manchester on patients' perceptions of medical errors and by empirical evidence suggesting that patients' assessments may not be a sufficient basis for assessing the technical quality of their primary care. Technical aspects of care are more appropriately assessed through other mechanisms, e.g. the Quality and Outcomes Framework of the general practitioner contract.
Identifying items for the questionnaire
We then cross-referenced the attributes of general practice care valued by patients against items in a number of questionnaires commonly used in primary care in the UK, US and Europe [17–22] to identify items without copyright restrictions which might be used or adapted to meet the needs of the questionnaire. Our aim was to identify items for the new questionnaire with likely face and construct validity which would be able to distinguish between practices with the size of sample proposed. This last criterion differed for items addressing out of hours care, where data were to be reported at Primary Care Trust rather than practice level.
The draft questionnaire was then subjected to an iterative process of development over five months which included: (i) regular meetings of a joint review group containing representatives of the academic advisors (JC and MR), staff from Ipsos MORI (including PS and SN), and representatives of the Department of Health; (ii) three meetings of a stakeholder review group including patient representatives, the British Medical Association, the Royal College of General Practitioners, the Royal College of Nursing, the Healthcare Commission, and NHS Employers; and (iii) four waves of cognitive testing.
Four waves of cognitive testing were undertaken between July and November 2008 with progressive drafts of the questionnaire. This included a total of fifty interviews lasting between 45 and 60 minutes carried out by Ipsos MORI, with interview subjects selected to represent people from a range of socio-demographic backgrounds and people with specific types of disability (e.g. deafness) or recent experience of healthcare relevant to specific domains within the questionnaire (such as out-of-hours care).
Full details of the cognitive testing are available. The interviews were conducted one-to-one and began with the respondent completing the questionnaire with an Ipsos MORI researcher present. Some respondents spontaneously mentioned issues while filling in the questionnaire, while others simply completed it to the best of their ability. Once the questionnaire had been completed, it was discussed as a whole, and questions of particular interest were explored in more detail.
As a result of the cognitive testing, repeated minor changes were made to the questionnaire, which were then tested in the next round of cognitive interviewing. This process resulted in progressive refinement of the questionnaire over a period of five months. There were significant constraints on the development of the questionnaire in two areas. The first related to patients' ability to get an appointment within a fixed period of time (e.g. two working days). Responses to these questions were tied to ongoing payments to general practitioners as part of their contract with the NHS, and a degree of backward comparability with a previous questionnaire was necessary, even though some uncertainties remained, especially around patients' interpretation of questions of the form 'Thinking about the last time you tried to see a doctor fairly quickly, were you able to see a doctor on the same day or in the next two days the surgery was open?'.
The second area where there remained some uncertainty about patients' interpretation of the questions related to care planning. Although the UK Department of Health had made an important policy commitment to deliver written care plans to all patients with long term conditions, a significant proportion of patients found the concept difficult to interpret. The questions in this section were formulated to allow patients to express this uncertainty, with the aim that we would be able to assess over time the proportion of patients able to engage with these questions, an important issue for UK policymakers. Questions on socio-demographic aspects of care were drawn from published approved questions from the Office for National Statistics.
Piloting and analysis
In November 2008, a pilot version of the questionnaire [see Additional file 1] was sent to a random sample of 1500 members of the public drawn from the electoral roll, mailed second class with a covering letter. Two reminders were sent to non-responders after intervals of approximately two weeks. The results below summarise analyses of these pilot data. Except where we draw attention to differences, all items in the pilot questionnaire were identical to those in the final questionnaire [see Additional file 2].
(a) Response rates
In order to test the impact of questions on religion and sexuality on response rate, half of the subjects (randomly selected) received questionnaires containing these items, and half received questionnaires without them.
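This randomised split-half design lends itself to a simple comparison of response rates between the two arms. The sketch below shows one way such a comparison might be run as a two-proportion z-test; the counts are illustrative assumptions, not the pilot's actual figures.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two response rates."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled response proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Illustrative only: 750 recipients per arm with assumed response counts
# (arm 1 = items included, arm 2 = items omitted)
z, p_value = two_proportion_z(280, 750, 295, 750)
```

With counts of this order, a difference of two percentage points would not approach conventional significance, which is the kind of judgement the pilot comparison was designed to support.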
(b) Extreme and error responses
Floor and ceiling effects were investigated by inspection of the number of respondents endorsing extreme response categories, expressed as a proportion of valid responses obtained. Errors arising from questions offering a 'branching' option were investigated by examining the number of 'error respondents' expressed as a percentage of the total number of responses to the question immediately following the question offering a branching option.
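As a minimal sketch of the floor/ceiling calculation described above, the fragment below computes the proportion of valid responses falling in the extreme categories of a 5-point item; the response coding and the data are assumed purely for illustration.

```python
from collections import Counter

# Assumed coding: 1 = lowest category, 5 = highest, None = missing
responses = [1, 2, 3, 5, 5, 5, 4, 2, None, 5, 1, 3]

valid = [r for r in responses if r is not None]  # valid responses only
counts = Counter(valid)

floor = counts[1] / len(valid)    # proportion endorsing the lowest category
ceiling = counts[5] / len(valid)  # proportion endorsing the highest category
```

Missing responses are excluded from the denominator, so the proportions are relative to valid responses obtained, as in the text.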
(c) Internal structure of the questionnaire
The internal structure of the general evaluative items (excluding items relating to care planning or out of hours care) was evaluated using exploratory principal components analysis with listwise deletion of missing data. Inspection of a scree plot of unrotated components was used to determine the number of factors, followed by varimax rotation of the final solution to assist in the interpretation of components.
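As a sketch of this analytic approach, the fragment below runs a principal components analysis via eigendecomposition of a correlation matrix, orders the eigenvalues as they would appear on a scree plot, and applies a standard varimax rotation to the retained loadings. The data are simulated and the two-component cut-off is assumed purely for illustration; the pilot analysis would have used the actual item responses.

```python
import numpy as np

def varimax(L, gamma=1.0, max_iter=100, tol=1e-6):
    """Varimax rotation of a loading matrix L (items x components)."""
    p, k = L.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - (gamma / p) * Lr @ np.diag(np.sum(Lr ** 2, axis=0))))
        R = u @ vt  # orthogonal rotation maximising the varimax criterion
        new_var = np.sum(s)
        if new_var - var < tol:
            break
        var = new_var
    return L @ R

# Simulated responses: 200 listwise-complete cases, 6 items
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))

C = np.corrcoef(X, rowvar=False)       # correlation matrix of the items
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]      # descending order, as on a scree plot
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 2                                  # components retained (assumed cut-off)
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])
rotated = varimax(loadings)
```

Because varimax is an orthogonal rotation, the communalities (row sums of squared loadings) are unchanged; rotation only redistributes variance across components to make each item load mainly on one component, which is what "assist in the interpretation" means here.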