
Development of the Health Literacy of Caregivers Scale - Cancer (HLCS-C): item generation and content validity testing



Health literacy refers to an individual’s ability to engage with health information and services. Cancer caregivers play a vital role in the care of people with cancer, and their capacity to find, understand, appraise and use health information and services influences how effectively they are able to undertake this role. The aim of this study was to develop an instrument to measure health literacy of cancer caregivers.


Content areas for the new instrument were identified from a conceptual model of cancer caregiver health literacy. Item content was guided by statements provided by key stakeholders during consultation activities and selected to be representative across the range of cancer caregiver experiences. Content validity of items was assessed through expert review (n = 7) and cognitive interviews with caregivers (n = 16).


An initial pool of 82 items was generated across 10 domains. Two categories of response options were developed for these items: agreement with statements, and difficulty undertaking presented tasks. Expert review revealed that the majority of items were relevant and clear (Content Validity Index > 0.78). Cognitive interviews with caregivers suggested that all except three items were well understood.


A resulting 88-item questionnaire was developed to assess cancer caregiver health literacy. Further work is required to assess the construct validity and reliability of the new measure, and to remove poorly performing and redundant items, resulting in a shorter, final measure. The new measure has the potential to inform the development and evaluation of interventions and the improvement of health service delivery to cancer caregivers.



Background

A diagnosis of cancer impacts not only the person diagnosed, but also their family members and friends. These social supports are often called upon to provide informal care and assistance managing the disease [1] and to provide practical, emotional and physical support [2]. Individuals who provide informal care and support, often referred to as caregivers [3], also play a significant role in health-related decision-making [4], are involved in communications with healthcare providers [5], and assist with sourcing and interpreting health information [6]. These caregiving responsibilities are often undertaken unexpectedly, and caregivers are often provided limited information and support [1]. Recognition of the challenges of the caregiving role has led to the development of interventions designed to meet the informational, practical, and psychosocial needs of caregivers [7]–[9]. Although information provision is included in the majority of these interventions [10], few studies have examined improvements in the level of caregivers’ knowledge and skills [7],[10]. This may, in part, be due to the lack of measurement tools that assess caregiver knowledge and skills [2].

Consistent with broad definitions of health literacy [11]–[13], caregiver health literacy is defined here as the personal characteristics and social resources needed for caregivers to access, understand, appraise and use information and services to participate in decisions relating to the health and care of the care recipient. This includes the capacity to communicate, assert and enact these decisions. Whilst evidence suggests an association between poor health literacy and poorer health outcomes [14], worse physical functioning and reduced quality of life [14]–[18], little is understood about the relationship between caregiver health literacy and the health outcomes of care recipients.

To accurately identify the health literacy needs of cancer caregivers, and understand the impact of caregiver health literacy on care recipient health outcomes, it is essential to measure the construct effectively. Previous studies of caregiver health literacy [19]–[22] have used measures that assess a subset of health literacy constructs. Measures such as the Test of Functional Health Literacy in Adults (TOFHLA [23]) or its short form [24], the Rapid Estimate of Adult Literacy in Medicine [25], and the Newest Vital Sign [26] assess an individual’s reading, numeracy, and comprehension skills in relation to healthcare. Reviews of health literacy measurement instruments increasingly call for the development of tools that capture the full range of health literacy constructs [27]–[29], such as critical thinking, interaction and communication, and confidence [11].

In response to this gap in the literature, health literacy measurement tools are now emerging that capture the multidimensional nature of health literacy [30],[31]. However, these tools are grounded in the perspectives of the potential care recipient, and have limited utility for the identification of the needs of caregivers. Similarly, caregiver health literacy measures designed to assess health literacy of parents of infants [32],[33] cover domains not relevant to the role of caregiving for an adult recipient.

The aim of the current study was to develop a measure of health literacy specifically for caregivers of people with cancer. Best practice guidelines for questionnaire development require a detailed conceptual basis to guide development [34],[35]. The conceptual model of caregiver health literacy developed by the authors (Yuen, Dodson, Batterham, Knight, Chirgwin, & Livingston, in press) was used as the basis for the development of the Health Literacy of Caregivers Scale - Cancer (HLCS-C). The model, as shown in Figure 1, proposes six major themes and 17 sub-themes associated with caregiver health literacy.

Figure 1. Conceptual model of cancer caregiver health literacy (Yuen et al., in press).


Methods

A validity-driven approach [36] was employed in the development of the HLCS-C. The steps undertaken are outlined in Figure 2. The study was approved by the Eastern Health Human Research Ethics Committee (E41-1011) and the Deakin University Human Research and Ethics Committee (2011–115), in Melbourne, Australia.

Figure 2. Steps undertaken to develop items for the new measure of cancer caregiver health literacy.

Content area specification

The content areas for inclusion in the questionnaire were drawn directly from the 17 sub-themes in the conceptual model of cancer caregiver health literacy (see Figure 1). The following considerations were used to determine whether (and how) themes should be represented in the questionnaire: 1) the questionnaire should capture the experiences of caregivers caring for recipients with a wide range of cancer types, stages, treatments, and potential outcomes; 2) the questionnaire should capture the experience of caregivers providing differing forms and levels of support; 3) the questionnaire should be consistent with the broad definition of caregiver health literacy, and encompass factors associated with accessing, understanding, appraising and using health information to promote and maintain the health of the care recipient; 4) the questionnaire should be presented as a list of items/statements accompanied by an appropriate set of response options; and 5) the questionnaire should contain as few domains as possible to reduce length and administration burden.

Another consideration when identifying content areas for inclusion was whether the representative statements generated by participants during consultation activities captured caregiver experiences, or whether they captured broader contextual factors that influence caregiver health literacy. In addition, content areas were examined to determine whether statements representative of a sub-theme could be combined to form a scale; previous scale development studies that used similar processes to derive a conceptual model found that, although statements within some sub-themes were conceptually related, they could not be summed to form a scale score and required deletion on psychometric grounds [30]. Further, to assist cross-referencing of the new measure against other measurement tools that assess related constructs, the included content areas were also aligned with a recently developed taxonomy that identified 12 dimensions of health literacy [11]: literacy; interaction; comprehension; numeracy; information seeking; application/function; decision making/critical thinking; evaluation; responsibility; confidence; navigation; and maintaining and promoting health (Table 1).

Table 1 Specification of the ten scales hypothesized to define cancer caregiver health literacy, reasons for exclusion of content areas, and example items for each scale

Generation of items and response scale

Statements and words provided by participants during consultation activities associated with the development of the cancer caregiver health literacy conceptual model (see Table 1) were used as the starting point for questionnaire items [37] to maximize content validity. For each content area, item selection and refinement was guided by two vignettes: one describing an individual with a high degree of capacity in that area, the other an individual with low capacity [30]. Where the proposed content areas for the new measure were similar to domains included in the Health Literacy Questionnaire (HLQ; [30]), a validated measure of health literacy derived using similar approaches, the HLQ items were used as the basis and revised to accommodate the caregiver audience. Response scales for each content area were developed to match the nature of the associated items and vignettes. Refinements to how content areas and vignettes were framed were undertaken to ensure consistency in response scales across the content areas of the proposed questionnaire. Items were also examined against structured item development criteria [38] (see Table 2). Readability of items was assessed using the Flesch Reading Ease [39] and Flesch-Kincaid Grade Level [40] formulas available through Microsoft Word.

Table 2 Structured item development criteria used to assess quality of items
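The two readability formulas mentioned above are standard and are computed from word, sentence, and syllable counts alone. The sketch below is a minimal illustration: counts are passed in directly rather than estimated from raw text, since syllable-counting heuristics vary between tools such as Microsoft Word.

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease: higher scores (up to ~100) indicate easier text."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: approximate US school grade required."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Hypothetical item: 10 words in 1 sentence, 13 syllables in total
ease = flesch_reading_ease(10, 1, 13)    # ~86.7, in the "easy" range
grade = flesch_kincaid_grade(10, 1, 13)  # ~3.65, early primary level
```

Short, concrete item wording drives both statistics in the same direction: fewer words per sentence and fewer syllables per word raise the reading-ease score and lower the grade level.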

Item difficulty was included in the item development criteria to ensure that the final items formed a scale that could distinguish between low, moderate, and high levels of health literacy (i.e., scale sensitivity). The revised Bloom’s taxonomy, which includes two dimensions (knowledge and cognitive process; [41],[42]), was used to guide the selection of the set of items for each content area to ensure they captured a range of difficulty. The first Bloom dimension describes levels of knowledge acquired (factual, conceptual, procedural, or metacognitive), whilst the second dimension describes cognitive processes that occur during learning (remembering, understanding, applying, analyzing, evaluating, and creating; [41],[42]). It is posited that items addressing higher-level cognitive tasks (e.g., decision-making) would elicit fewer maximum ratings than items addressing lower-level cognitive tasks (e.g., access to information). The taxonomy has previously been used to guide the development of health literacy measures [30],[43],[44].

Expert review

Expert review of items was undertaken to establish the content validity of the proposed items [45]. In a judgment-quantification process [46], items within each proposed scale were assessed by seven experts for relevance and clarity. Participants included two oncologists, a general practitioner, an oncology social worker, a general medical nurse, a health researcher, a policy advisor for a state-wide caregiver organization, and a retired executive member of a cancer information and support service. The content validity of the tool as a representation of its intended purpose was also qualitatively assessed. Experts were identified and recruited from the research team’s existing professional networks. A panel of between 5 and 10 experts has been suggested as sufficient for establishing content validity through expert review [46].

Experts were asked to assess each item for relevance and clarity using a 3-point scale (“low, moderate, high” and “unclear, neutral, clear” respectively). To determine content validity, expert ratings for relevance and clarity were quantified using the Content Validity Index (CVI) calculated as the percentage of experts who indicated 2 or 3 on the scale. It has been recommended that when six or more experts have evaluated the instrument, items with a CVI less than 0.78 should be considered for revision or deletion [46].
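The CVI calculation described above is simple enough to express directly. The sketch below illustrates it for a single item with hypothetical ratings, treating a rating of 2 or 3 on the 3-point scale as endorsement:

```python
def content_validity_index(ratings):
    """CVI for one item: proportion of experts rating it 2 or 3
    on the 3-point relevance (or clarity) scale."""
    return sum(1 for r in ratings if r >= 2) / len(ratings)

# Hypothetical relevance ratings from a panel of 7 experts
ratings = [3, 3, 2, 3, 2, 3, 1]
cvi = content_validity_index(ratings)  # 6/7 ≈ 0.86
needs_revision = cvi < 0.78            # threshold recommended for 6+ experts [46]
```

With seven raters, a single dissenting expert yields a CVI of 6/7 ≈ 0.86 (acceptable), while two dissenters yield 5/7 ≈ 0.71, which falls below the 0.78 threshold and flags the item for revision or deletion.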

Experts were also asked to consider all items within individual scales and respond to two open-ended questions: “Do you suggest including any other ideas to represent the scale?” and “Do you suggest changing any words for any of the above items?” Experts were also asked to provide feedback on whether any major concepts or ideas were omitted from the questionnaire and to make suggestions on how to improve the instrument. To guide the revision of items, responses to the open-ended questions were synthesized and reviewed.

Cognitive interviews

Cognitive interviews are frequently used in questionnaire development to determine whether respondents interpret and respond to items in the way the researchers intended [47]. The think-aloud approach [47],[48] was the predominant method used in the current study. A convenience sample of participants was recruited from a not-for-profit, government-funded caregiver organization. Ninety-nine caregivers who identified themselves as providing care to a family member or friend with cancer were invited to participate. Nineteen caregivers (19%) returned the questionnaire and were then contacted by telephone about taking part in an interview. Three of these respondents declined to participate in the cognitive interview because of personal circumstances. Of the 16 caregivers who participated, the majority were female (94%), and ages ranged between 42 and 80 years (Mdn = 61.5; see Table 3).

Table 3 Demographic characteristics of caregivers who participated in cognitive interviews

To minimize respondent burden, a sampling scheme was applied to allow each participant to be interviewed on items from approximately 6, rather than all 10, constructs in the questionnaire. Participants were randomly assigned an item set that included items from complete constructs. Using this method, each item in the questionnaire was reviewed by at least 8 participants (range = 8 – 11; Mdn = 9). Although participants did not complete the full set of items, the sampling scheme was sufficient as the purpose of the cognitive interviews was to test the items across a range of individuals to inform decision making [47].
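The assignment procedure is not detailed here; one simple way to achieve balanced coverage of this kind is a rotation design, sketched below under the study's parameters (16 participants, 10 constructs, 6 constructs per participant). The actual scheme used in the study may have differed:

```python
from collections import Counter

N_PARTICIPANTS, N_CONSTRUCTS, PER_PERSON = 16, 10, 6

# Rotation design: participant p reviews 6 consecutive constructs
# starting at offset p (mod 10), so coverage stays roughly balanced.
assignments = {
    p: [(p + k) % N_CONSTRUCTS for k in range(PER_PERSON)]
    for p in range(N_PARTICIPANTS)
}

# How many participants review each construct?
coverage = Counter(c for items in assignments.values() for c in items)

# Every construct is reviewed by at least 8 participants
assert min(coverage.values()) >= 8
```

Any such design distributes 16 × 6 = 96 construct assignments across 10 constructs, so each construct averages about 9.6 reviewers; the rotation simply prevents random assignment from leaving any construct under-covered.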

Responses from the cognitive interviews were analyzed using a systematic evaluation of participant responses for each item [49]. Each item was assessed using three criteria: whether the participant interpreted the question as the researchers had intended; whether the item was applicable to the participant; and whether the participant found it difficult to respond to the item. Where participants’ responses indicated problems with an item, common themes and issues were noted.


Results

Selection of content areas

Inspection of the 17 sub-themes outlined in the cancer caregiver health literacy model against the considerations for inclusion of content areas led to the identification of 10 constructs for the new questionnaire (see Table 1). Several sub-themes were subsumed under the encompassing scale titles: Adequate information about cancer and cancer management, and Understanding the healthcare system. Two sub-themes were considered broader contextual factors that influenced caregiver health literacy, and thus were excluded from the questionnaire. For example, statements in the Financial and Legal Support sub-theme related to availability of support from Government services, which was considered a broader contextual factor that influenced a caregiver’s capacity to effectively engage with the caregiving role. Two additional sub-themes were excluded because their representative statements, although conceptually related, were considered unable to be summed to form a scale.

Item generation and response options

Eighty-two items were developed for expert review, with 7 to 12 items for each construct (see Table 1). An item pool 50% larger than that intended for the final scale was drafted to enable identification of items with adequate internal consistency as determined through psychometric analyses (Phase 3; see Figure 2) [45]. For eight content areas, an ‘agree/disagree’ Likert scale was suitable. For the remaining two content areas (Processing health information, and Active engagement with healthcare providers) a ‘cannot do/very easy’ Likert scale was more suitable. Readability analysis of the items showed a Flesch-Kincaid reading level of grade 6.7, with a Flesch reading ease of 80.6 (out of a possible 100, with higher scores indicating greater ease).

Expert review

The ranges of content validity indices for relevance and clarity for the ten scales, as assessed by 7 experts, are provided in Table 4. Although 8 experts responded, one participant provided general comments about including additional content areas rather than assessing the individual items, and was thus excluded from the content validity analysis. This participant’s comments were considered when determining the inclusion of additional content areas. Items were considered relevant by experts (CVI > 0.78) for all but one item related to processing health information (#70, “Find out if the health information that I have received is suitable for the person I am caring for”). Item #70 was considered invalid for both relevance and clarity (CVI < 0.78), and was thus revised (see Table 5) after considering expert comments and reviewing participant statements generated during concept mapping workshops.

Table 4 Range of CVI scores for relevance and clarity for ten hypothesized scales of cancer caregiver health literacy
Table 5 Seven revised items in response to content validity index scores for relevance and clarity, and comments from experts

Using the content validity equation, five items, although deemed relevant by experts, were considered to lack clarity (CVI < 0.78; see Table 4). These five items were revised (see Table 5). An additional item (#81, “I know which healthcare providers look after the health of the person I care for”), although deemed relevant and clear, was deleted in response to expert comments about its similarity to another item in the scale.

Twelve items, although demonstrating adequate relevance and clarity (CVI > 0.78), underwent minor revisions in response to suggested improvements from experts (see Table 6). Item #12 (“I have strong support from at least one friend”) was combined with item #66 (“I have strong support from at least one family member”) following feedback about the similarity of the items and suggestions from experts to merge them.

Table 6 Revised items following expert suggestions for revision

Nine new items were included in the questionnaire in response to comments from experts (see Table 7). Experts identified three main areas missing from the questionnaire: understanding of healthcare services, palliative care, and sexuality issues; however, only understanding of healthcare services was captured in the newly generated items. Revision of the item pool resulted in 89 items for testing through cognitive interviews.

Table 7 New items following expert review and reasons for inclusion

Cognitive interviews

Overall, participants interpreted and responded to the majority of the questionnaire items in the ways intended. However, three items (#18, #74 and #1) emerged as having common issues. For item #18 (“I have all the information I need to help make decisions about treatments”), two participants reported that they did not help make decisions about treatments, and thus the item was not personally relevant to them (e.g., “I’m not a doctor and I wouldn’t know of other treatments, so I trusted what doctors told me” [Participant #1]). For item #74 (“Find out if health information from various resources is suitable for the person I am caring for”), participants interpreted the word ‘resource’ as being internet-specific (e.g., “Yeah I think it is, you just borrow the kids internet and have a look” [Participant #11]). Further, for item #1 (“I spend a lot of time looking for information about the cancer”), two caregivers reported that although they spent time looking for information when their care recipient was first diagnosed with cancer, this was no longer relevant after many years of providing care (e.g., “My husband has had cancer now for years. At the beginning I spent a lot of time researching but now only when you feel up to it” [Participant #10]).


Discussion

The current study describes item generation and content validity testing of a new questionnaire to assess the self-reported health literacy of caregivers of people with cancer, the Health Literacy of Caregivers Scale - Cancer (HLCS-C). As a result of the expert review and cognitive interviews, the HLCS-C now contains 88 items across 10 scales: proactivity and determination to seek information; adequate information about cancer and cancer management; supported by healthcare providers to understand information; social support; communication with the care recipient; understanding the care recipient; self-care; understanding the healthcare system; processing health information; and active engagement with healthcare providers.

The scales included in the HLCS-C cover a broad range of themes assessing individual and interpersonal factors, as well as healthcare provider and healthcare system factors, that may be relevant to caregiver health literacy. Many of these themes are not currently included in widely-used measures of health literacy. For example, some scales in the HLCS-C assess an individual’s comprehension (e.g., Adequate information about cancer and cancer management, and Understanding the healthcare system) or their critical thinking skills (e.g., Processing health information), while other scales assess a caregiver’s interpersonal relationship with the care recipient (e.g., Communication with the care recipient, and Understanding the care recipient). Caregivers’ capacity to effectively engage with healthcare providers was also included (Active engagement with healthcare providers). Further extending the dimensions covered by health literacy measures, the HLCS-C assesses external influences on an individual’s health literacy. Similar to the Health Literacy Questionnaire [30], the HLCS-C contains a scale that assesses caregivers’ perspectives on whether healthcare providers deliver services and information in ways that enable them to adequately navigate the caregiving role and the healthcare system (e.g., Supported by healthcare providers to understand information). Unlike existing unidimensional measures of health literacy, the multidimensional nature of the HLCS-C allows identification of specific strengths and difficulties, and therefore of opportunities to improve caregiver health literacy and the health literacy responsiveness of the healthcare system.

As part of the expert review, experts suggested including items related to sexuality issues. However, the authors decided not to include such items because this topic was not identified by stakeholders during the concept mapping workshops. Concept mapping workshop participants included caregivers of, and people with, cancer across a range of cancer types and stages. It is possible that issues of sexuality were not their primary concern when identifying health literacy needs. Further, given the workshop setting, participants may have felt uncomfortable discussing the topic of sexuality. Moreover, studies have shown that caregivers of people with gender-specific cancers (e.g., breast or prostate) were more likely to report additional information needs related to sexual and physical intimacy [50]. Further revisions of the questionnaire could consider sub-sets of items relevant to specific cancer types.

Similarly, experts commented on the inclusion of items related to palliative care. However, the questionnaire was designed for use with caregivers across the cancer trajectory. Thus, the authors considered that items related to palliative care would not be relevant to all cancer caregivers. Future revisions of the questionnaire could consider items that are specific to caregivers providing care to people with advanced stage cancer.

To address the three items identified as having common issues following cognitive interviews, the decision was made to revise two items and delete one item. To ensure included items were relevant to all cancer caregivers, item #18 was revised to “I have enough information to understand the potential side effects of cancer treatment”, which still captured the concept of adequate cancer information. To improve clarity for item #74, the word ‘resources’ was replaced with ‘places’, as participants frequently used this word during cognitive interviews to describe sources of information. Further, as item #1 was not relevant for all caregivers across the caregiving trajectory, the item was deleted. Cognitive testing of the revised items is suggested to ensure items are understood as intended.

Two of the 16 participants responded ‘disagree/very difficult’ to five items, suggesting that they had difficulty with, or were unable to complete, those tasks. However, during the cognitive interviews it emerged that these participants had provided care to someone who had deliberately avoided conventional cancer treatments in favor of exclusively using complementary and alternative therapies to manage the cancer. Thus, in responding to these items, the participants were not conveying difficulty or inability to complete the task; rather, their intention was to convey that the item was ‘not applicable’ to their circumstances. Item writing was guided by statements generated during concept mapping workshops by participants who were recipients of, or caregivers of people who received, conventional cancer treatments. It is therefore recommended that future studies be conducted with caregivers of people who solely use complementary and alternative therapies to manage their cancer, to develop a sub-set of items that addresses the health literacy needs of this caregiver population.

Limitations of the study include the low response rate for the expert review (29%). Although low response rates may affect generalizability of the results, the sample size for the expert review analysis was in line with recommendations [51]. The participation rate was also low for the cognitive interviews (19%); however, between 8 and 11 interviews were conducted for each item, which met the recommended sample size of 5 to 15 participants for identifying problems with items [47]. In addition, participants in the cognitive interviews were predominantly female (94%), which limits generalizability of the findings. Finally, reporting error may occur due to the self-report nature of the questionnaire, in which respondents may report differently depending on their social experiences [52].


Conclusions

Using systematic, grounded approaches, a new measure of cancer caregiver health literacy is being developed that contains 10 key constructs hypothesized to represent a caregiver’s capacity to find, understand, appraise, and use health information to provide optimal care. The next step in the development of this measure is to assess the reliability and construct validity of the questionnaire in a large sample of Australian cancer caregivers, and to reduce the number of items it contains.

Practice implications

The current study represents the first attempt to establish an instrument to measure the health literacy of caregivers of people with cancer. Assessment and understanding of the health literacy needs of caregivers has the potential to enable the evaluation and development of interventions designed to improve caregiver knowledge and skills.


Consent

Written informed consent was obtained from caregivers for the publication of this report.



Abbreviations

CVI: Content Validity Index

HLCS-C: Health Literacy of Caregivers Scale - Cancer

HLQ: Health Literacy Questionnaire

TOFHLA: Test of Functional Health Literacy in Adults


References

  1. van Ryn M, Sanders S, Kahn K, van Houtven C, Griffin JM, Martin M, Atienza AA, Phelan S, Finstad D, Rowland J: Objective burden, resources, and other stressors among informal cancer caregivers: a hidden quality issue?. Psycho-Oncology. 2011, 20 (1): 44-52. 10.1002/pon.1703.

  2. Given BA, Given CW, Sherwood PR: Family and caregiver needs over the course of the cancer trajectory. J Support Oncol. 2012, 10 (2): 57-64. 10.1016/j.suponc.2011.10.003.

  3. Reinhard SC, Given B, Petlick NH, Bemis A: Supporting Family Caregivers in Providing Care. Patient safety and quality: An evidence-based handbook for nurses. Volume 1. 2008, AHRQ Publication, Rockville, M.D, 1

  4. Hubbard G, Illingworth N, Rowa-Dewar N, Forbat L, Kearney N: Treatment decision-making in cancer care: the role of the carer. J Clin Nurs. 2010, 19 (1): 2023-2031. 10.1111/j.1365-2702.2009.03062.x.

  5. Laidsaar-Powell RC, Butow PN, Bu S, Charles C, Gafni A, Lam WWT, Jansen J, McCaffery KJ, Shepherd HL, Tattersall MHN: Physician–patient–companion communication and decision-making: a systematic review of triadic medical consultations. Patient Educ Couns. 2013, 91 (1): 3-13. 10.1016/j.pec.2012.11.007.

  6. Bevan JL, Pecchioni LL: Understanding the impact of family caregiver cancer literacy on patient health outcomes. Patient Educ Couns. 2008, 71 (3): 356-364. 10.1016/j.pec.2008.02.022.

  7. Applebaum AJ, Breitbart WS: Care for the cancer caregiver: a systematic review. Palliat Support Care. 2013, 11 (3): 231-252. 10.1017/S1478951512000594.

  8. Badr H, Krebs P: A systematic review and meta‐analysis of psychosocial interventions for couples coping with cancer. Psycho-Oncology. 2013, 22 (8): 1688-1704. 10.1002/pon.3200.

  9. Hudson PL, Remedios C, Thomas K: A systematic review of psychosocial interventions for family carers of palliative care patients. BMC Palliat Care. 2010, 9 (1): 17-22. 10.1186/1472-684X-9-17.

  10. Northouse LL, Katapodi MC, Song L, Zhang L, Mood DW: Interventions with family caregivers of cancer patients: meta‐analysis of randomized trials. CA Cancer J Clin. 2010, 60 (5): 317-339.

  11. Sorensen K, Broucke SV, Fullam J, Doyle G, Pelikan J, Slonska Z, Brand H: Health literacy and public health: a systematic review and integration of definitions and models. BMC Public Health. 2012, 12 (1): 80-93. 10.1186/1471-2458-12-80.

  12. World Health Organization: Health Promotion Glossary. 1998, The Organization, Geneva, Switzerland

  13. Ophelia Toolkit: a step-by-step guide for identifying and responding to health literacy needs within local communities. Part A: introduction to health literacy. []

  14. Zhang NJ, Terry A, McHorney CA: Impact of health literacy on medication adherence: a systematic review and meta-analysis. Ann Pharmacother. 2014, 48 (6): 741-751. 10.1177/1060028014526562.

  15. Apter AJ, Wan F, Reisine S, Bender B, Rand C, Bogen DK, Bennett IM, Bryant-Stephens T, Roy J, Gonzalez R: The association of health literacy with adherence and outcomes in moderate-severe asthma. J Allergy Clin Immunol. 2013, 132 (2): 321-327. 10.1016/j.jaci.2013.02.014.

  16. Berkman ND, Sheridan SL, Donahue KE, Halpern DJ, Crotty K: Low health literacy and health outcomes: an updated systematic review. Ann Intern Med. 2011, 155 (2): 97-107. 10.7326/0003-4819-155-2-201107190-00005.

  17. Kamimura A, Christensen N, Tabler J, Ashby J, Olson LM: Patients utilizing a free clinic: physical and mental health, health literacy, and social support. J Commun Health. 2013, 38 (4): 716-723. 10.1007/s10900-013-9669-x.

  18. Bostock S, Steptoe A: Association between low functional health literacy and mortality in older adults: longitudinal cohort study. Br Med J. 2012, 344: e1602. 10.1136/bmj.e1602.

  19. Garcia CH, Espinoza SE, Lichtenstein M, Hazuda HP: Health literacy associations between Hispanic elderly patients and their caregivers. J Health Commun. 2013, 18 (Suppl 1): 256-272. 10.1080/10810730.2013.829135.

  20. Lindquist LA, Nelia JBS, Tam K, Martin GJ, Baker DW: Inadequate health literacy among paid caregivers of seniors. J Gen Intern Med. 2011, 26 (5): 474-479. 10.1007/s11606-010-1596-2.

  21. Greenberg D, Dave M, Cagan PW, Ehrlich A: Health literacy in a geriatrics ambulatory practice: an assessment of older adults and their caregivers. Gerontologist. 2009, 49: 253-253.

  22. Greenberg D, Cho K, Wald-Cagan P, Ehrlich A: Health literacy in a geriatric ambulatory practice: an exploratory study of older adults and their caregivers. Gerontologist. 2008, 48: 485-485. 10.1093/geront/48.4.485.

  23. Parker RM, Baker DW, Williams MV, Nurss JR: The test of functional health literacy in adults. J Gen Intern Med. 1995, 10 (10): 537-541. 10.1007/BF02640361.

  24. Baker DW, Williams MV, Parker RM, Gazmararian JA, Nurss J: Development of a brief test to measure functional health literacy. Patient Educ Couns. 1999, 38 (1): 33-42. 10.1016/S0738-3991(98)00116-5.

  25. Davis TC, Crouch M, Long SW, Jackson RH, Bates P, George RB, Bairnsfather LE: Rapid assessment of literacy levels of adult primary care patients. Fam Med. 1991, 23 (6): 433-435.

  26. Weiss BD, Mays MZ, Martz W, Castro KM, DeWalt DA, Pignone MP, Mockbee J, Hale FA: Quick assessment of literacy in primary care: the Newest Vital Sign. Ann Fam Med. 2005, 3 (6): 514-522. 10.1370/afm.405.

  27. Haun JN, Valerio MA, McCormack LA, Sørensen K, Paasche-Orlow MK: Health literacy measurement: an inventory and descriptive summary of 51 instruments. J Health Commun. 2014, 19 (Suppl 2): 302-333. 10.1080/10810730.2014.936571.

  28. Jordan JE, Osborne RH, Buchbinder R: Critical appraisal of health literacy indices revealed variable underlying constructs, narrow content and psychometric weaknesses. J Clin Epidemiol. 2010, 64 (4): 366-379. 10.1016/j.jclinepi.2010.04.005.

  29. Nielsen-Bohlman L, Panzer AM, Kindig DA: Health Literacy: A Prescription To End Confusion. 2004, The National Academies Press, Washington, DC

  30. Osborne RH, Batterham RW, Elsworth GR, Hawkins M, Buchbinder R: The grounded psychometric development and initial validation of the Health Literacy Questionnaire (HLQ). BMC Public Health. 2013, 13 (1): 658-674. 10.1186/1471-2458-13-658.

  31. Jordan JE, Buchbinder R, Briggs AM, Elsworth GR, Busija L, Batterham R, Osborne RH: The Health Literacy Management Scale (HeLMS): a measure of an individual’s capacity to seek, understand and use health information within the healthcare setting. Patient Educ Couns. 2013, 91 (2): 228-235. 10.1016/j.pec.2013.01.013.

  32. Kumar D, Sanders L, Perrin EM, Lokker N, Patterson B, Gunn V, Finkle JP, Franco V, Choi L, Rothman RL: Parental understanding of infant health information: health literacy, numeracy, and the Parental Health Literacy Activities Test (PHLAT). Acad Pediatr. 2010, 10 (5): 309-316. 10.1016/j.acap.2010.06.007.

  33. Yin HS, Sanders LM, Rothman RL, Mendelsohn AL, Dreyer BP, White RO, Finkle JP, Prendes S, Perrin EM: Assessment of health literacy and numeracy among Spanish-speaking parents of young children: Validation of the Spanish Parental Health Literacy Activities Test (PHLAT-Spanish). Acad Pediatr. 2011, 12 (1): 68-74. 10.1016/j.acap.2011.08.008.

  34. U.S. Department of Health and Human Services Food and Drug Administration: Guidance for Industry: Patient-Reported Outcome Measures—Use in Medical Product Development to Support Labeling Claims. 2009, US Food and Drug Administration, Rockville, MD

  35. Streiner DL, Norman GR: Health Measurement Scales: A Practical Guide to Their Development and Use. 2008, Oxford University Press, Oxford, UK

  36. Buchbinder R, Batterham R, Elsworth G, Dionne CE, Irvin E, Osborne RH: A validity-driven approach to the understanding of the personal and societal burden of low back pain: development of a conceptual and measurement model. Arthritis Res Ther. 2011, 13 (5): 1-13. 10.1186/ar3468.

  37. Hox JJ: From Theoretical Concept to Survey Question. Survey Management and Process Quality. Edited by: Lyberg LE, Biemer P, Collins M, Leeuw ED, Dippo C, Schwarz N, Trewin D. 1997, John Wiley & Sons, Inc, New York, NY

  38. Patrick DL, Burke LB, Gwaltney CJ, Leidy NK, Martin ML, Molsen E, Ring L: Content validity - Establishing and reporting the evidence in newly developed patient-reported outcomes (PRO) instruments for medical product evaluation: ISPOR PRO good research practices task force report: Part 1 - Eliciting concepts for a new PRO instrument. Value Health. 2011, 14 (8): 967-977. 10.1016/j.jval.2011.06.014.

  39. Flesch R: A new readability yardstick. J Appl Psychol. 1948, 32 (3): 221. 10.1037/h0057532.

  40. Kincaid JP, Fishburne RP, Rogers RL, Chissom BS: Derivation of New Readability Formulas (Automated Readability Index, Fog Count and Flesch Reading Ease Formula) for Navy Enlisted Personnel. DTIC Document. 1975

  41. Anderson LW, Krathwohl DR: A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives. 2001, Longman, New York

  42. Krathwohl DR: A revision of Bloom’s taxonomy: an overview. Theory Pract. 2002, 41 (4): 212-218. 10.1207/s15430421tip4104_2.

  43. Leung AYM, Cheung MKT, Lou VWQ, Chan FHW, Ho CKY, Do TL, Chan SSC, Chi I: Development and validation of the Chinese Health Literacy Scale for chronic care. J Health Commun. 2013, 18 (Suppl 1): 205-222. 10.1080/10810730.2013.829138.

  44. Leung AYM, Lou VWQ, Cheung MKT, Chan SSC, Chi I: Development and validation of Chinese Health Literacy Scale for Diabetes. J Clin Nurs. 2012, 22 (15–16): 2090-2099.

  45. DeVellis RF: Scale Development: Theory and Applications, Vol. 3. 2011, Sage Publications, Inc, Thousand Oaks, CA

  46. Lynn MR: Determination and quantification of content validity. Nurs Res. 1986, 35 (6): 382-386. 10.1097/00006199-198611000-00017.

  47. Willis GB: Cognitive Interviewing: A Tool for Improving Questionnaire Design. 2005, Sage Publications, Inc, Thousand Oaks, CA

  48. Beatty PC, Willis GB: Research synthesis: the practice of cognitive interviewing. Public Opin Q. 2007, 71 (2): 287-311. 10.1093/poq/nfm006.

  49. Miles MB, Huberman AM, Saldaña J: Qualitative Data Analysis: A Methods Sourcebook, Vol. 3. 2013, Sage Publications, Inc, Thousand Oaks, CA

  50. McCarthy B: Family members of patients with cancer: what they know, how they know and what they want to know. Eur J Oncol Nurs. 2011, 15 (5): 428-441. 10.1016/j.ejon.2010.10.009.

  51. Polit DF, Beck CT, Owen SV: Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Res Nurs Health. 2007, 30 (4): 459-467. 10.1002/nur.20199.

  52. Sen A: Health: perception versus observation: self reported morbidity has severe limitations and can be extremely misleading. BMJ. 2002, 324 (7342): 860-861. 10.1136/bmj.324.7342.860.


Acknowledgements

The authors would like to sincerely thank the experts who participated in the expert review, the caregivers who participated in the cognitive interviews, Anne Muldowney from Carers Victoria for her help with recruiting caregivers, and Professor Richard Osborne for his contributions to the study.

Eva YN Yuen was supported by a Deakin University Postgraduate Research Scholarship.

The research was, in part, supported by a Deakin University Population Health Strategic Research Centre small project funding grant.

Author information


Corresponding author

Correspondence to Eva YN Yuen.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

EYNY conceived and designed the study, coordinated participant recruitment, conducted the cognitive interviews, analyzed the data, and drafted the manuscript. PML and SD participated in the conception and design of the study, and helped to draft the manuscript. TK, LR and SB participated in the coordination of the study, data analysis, and helped to draft the manuscript. All authors read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Yuen, E.Y., Knight, T., Dodson, S. et al. Development of the Health Literacy of Caregivers Scale - Cancer (HLCS-C): item generation and content validity testing. BMC Fam Pract 15, 202 (2014).


Keywords

  • Cancer
  • Caregivers
  • Health literacy
  • Information needs
  • Questionnaire development