  • Original research
  • Open access

Developing a tool for measuring the disaster resilience of healthcare rescuers: a modified Delphi study

Abstract

Background

Disaster resilience is an essential personal characteristic of healthcare rescue workers, enabling them to respond to disasters effectively and to maintain a state of adaptation after deployment. It is essential for disaster managers to recruit, assess, and prepare healthcare rescuers with this characteristic. A specific tool for measuring the disaster resilience of healthcare rescuers has yet to be devised.

Objective

The purpose of this study was to establish the content validity of a tool for measuring the disaster resilience of disaster rescue workers.

Methods

A modified Delphi approach was employed. Experts in disaster work and research were invited to rate the domains and items of a prototype tool for measuring disaster resilience in healthcare rescue workers. The panel of experts rated the relevance of the items using a 4-point Likert scale. The median, interquartile range, and level of agreement were calculated for each item, and Kendall's coefficient of concordance (W) was used to assess the consensus among the experts. The content validity index (CVI) was calculated to assess the content validity of the tool.

Results

A total of 22 and 21 experts were involved in the first and second rounds of this modified Delphi study (response rates of 91.7 and 95.5%, respectively). After two rounds of expert query, an eight-domain, 27-item disaster resilience measuring tool was established. The medians of all of the included items ranged from 3.50 to 4.00, the interquartile ranges from 0.00 to 1.00, and all items achieved ≥85% agreement. Kendall's coefficient of concordance (W) was 0.21 in the first round and 0.33 in the second round, with P < 0.01. The I-CVI ranged from 0.85 to 1.0, while the S-CVI/UA and S-CVI/Ave were 0.69 and 0.97, respectively.

Conclusion

Consensus was reached on a disaster resilience measuring tool covering 27 items. The content validity of this tool for measuring the disaster resilience of healthcare rescuers was excellent. The content of the tool has now been validated, and it is ready to be tested in a pilot study to assess its other psychometric properties.

Background

Resilience is regarded as the ability to “bounce back” from disaster, sustaining well-being and life satisfaction without negative psychological symptoms over time [1]. Resilience is also considered one of the protective factors against occupational burnout [2, 3]. It has been suggested that healthcare rescue workers who have a high level of disaster resilience are not only less likely to suffer from negative psychological problems such as anxiety, depression, and posttraumatic stress disorder (PTSD), but also work more effectively [4,5,6]. Thus, disaster resilience is essential for the health and well-being of both disaster rescuers and the survivors of a disaster. It is desirable for disaster rescue workers to be recruited from among those with a high level of resilience.

An annual average of 77,144 deaths due to disasters was recorded between 2000 and 2017 [7]. Recent data show that 10,373 lives were lost in 2018 because of catastrophic events such as earthquakes, tsunamis, and volcanic activity, a substantial decline from that average. The prevalence of PTSD among healthcare disaster rescuers was reported to be as high as 28.6% at 8 months after the Yushu earthquake in China [8]. Nurses who responded to the 2008 Wenchuan earthquake were at higher risk of suffering from PTSD (30%) compared to other healthcare rescuers [9].

Studies have also suggested that protective factors for resilience, such as social support and coping strategies, can be modified, learned, or cultivated through intervention programs [10, 11]. Thus, it is possible to design and develop interventions to foster the resilience of rescuers who are at high risk of suffering negative psychological consequences. There is also a need for a valid and reliable tool for measuring disaster resilience, for use in recruiting disaster rescue workers and in evaluating the effectiveness of interventions developed to enhance the resilience of individuals.

Existing instruments, such as the Connor-Davidson Resilience Scale (CD-RISC) [12, 13] and the Resilience Scale [14] have been used in studies to measure resilience among disaster rescue workers. However, these instruments were originally developed based on the general population or on patients with psychological disorders rather than specifically on rescue workers. Instruments that are “borrowed” from other populations or contexts may not be appropriate for the specific population or context of interest [15]. It is thus inappropriate to use an existing non-specific measuring tool to screen rescue workers for resilience in the recruitment process, or to evaluate the effectiveness of intervention programs aimed at fostering resilience in disaster rescue workers. As there is no specific resilience scale that can serve as a “gold standard,” and no specific instrument to measure the disaster resilience of rescue workers, the use of “borrowed” instruments on resilience has led to confusion in disaster management and research. It is therefore imperative to develop a valid and reliable instrument specifically for assessing the disaster resilience of disaster rescue workers in the context of disaster deployment.

Validation of a prototype tool

To our knowledge, there is no consensus on a framework for assessing disaster resilience in healthcare rescuers. A prototype tool for measuring the disaster resilience of rescue workers was developed by the research team. The tool was developed based on an extensive review of the literature on the characteristics of resilience among disaster rescue workers, a concept analysis of “disaster resilience”, and a focus group interview study in which disaster healthcare rescuers were asked to give their views on disaster resilience [16]. Based on the results of this work, a scoping review of tools for measuring the resilience of adults was conducted, and a prototype disaster resilience tool for healthcare rescuers was developed. The scale consists of eight domains: optimism, altruism, preparations for disaster, social support, perceived control, self-efficacy, coping strategies, and positive growth.

Methods

This study adopted a modified Delphi method to validate the instrument. A modified Delphi is a technique for establishing consensus among a panel of experts on a topic of interest [17]. A traditional Delphi process begins with an open-ended questionnaire, which is time-consuming and usually leads to a low response rate [18, 19]. In a modified Delphi approach, experts are consulted in the very first round using a structured questionnaire developed from extensive reviews of the literature and/or a focus group interview study [20]. The use of a modified Delphi process is appropriate when basic information concerning the target issue or topic is available and usable [17].

An online modified Delphi approach [19, 21] was adopted in this study to obtain the judgment of a panel of independent experts on this specific issue, for which there is insufficient knowledge and research evidence to provide guidance on practice [22].

The aim of this study was to refine the domains and items of the prototype tool for measuring disaster resilience among healthcare rescuers and to establish the content validity of the items in that tool.

Panel selection

A purposive and criterion-based sampling method [23, 24] was adopted for selecting the members of the panel.

In a Delphi study, the experts should come from various geographical locations [25], be knowledgeable [19, 24, 26], possess professional and specialized expertise [27], have attained a certain level of education [28], and be willing to participate in the survey [26, 28, 29].

It has been suggested that a sample of experts can be identified through conferences [30] and published literature [31]. The potential experts for this study were acquaintances from international conferences / workshops on disaster nursing/management, such as the World Society of Disaster Nursing (WSDN, Germany, October 2018) and the Asia Pacific Emergency and Disaster Nursing Network (APEDNN, Cambodia, November 2018), as well as internationally known experts identified from published research studies / books on topics related to disaster healthcare.

The experts who were involved in academic and/or empirical work on disasters were selected in accordance with the purpose of this project [32]. They came from various geographical locations: the United States of America, the United Kingdom, Australia, Japan, South Korea, Taiwan, Hong Kong, and mainland China. The members of the panel of experts in the present study were selected from different countries based on the following criteria for inclusion: (1) the possession of a bachelor’s degree or above; (2) relevant experience / significant contributions in disaster management, disaster nursing / medicine / healthcare, or disaster-related research; and (3) at least 5 years of disaster-related clinical or academic experience. Those who could not read English, or could not be reached by electronic means via computer / email, were excluded.

In the literature on Delphi studies, it is suggested that ten to fifteen subjects could be sufficient [17, 26]. It is also common for three experts to be considered sufficient for assessing the content validity of an instrument that has been developed [33]. Our aim was to recruit at least 15 international experts to take part in this study.

Format of the prototype tool for validation

The experts were invited to provide comments on the domains/components of the tool for measuring the disaster resilience of healthcare rescuers by rating the relevance of each item of the prototype tool on a 4-point Likert scale, with 1 = not relevant, 2 = somewhat relevant, 3 = quite relevant, and 4 = highly relevant [34]. Although a 3- or 5-point rating scale is the commonly adopted format for validation, a 4-point Likert scale was adopted to obtain an even number of possible responses and to avoid a neutral, ambivalent mid-point [33].

A pilot test of the validation form of the prototype tool was conducted among three experts who were not included in the panel of experts for the Delphi rounds. The pilot was used to estimate the time required to complete the form and to ensure the clarity of the items. The experts suggested that information on the background and aims of this study should be provided, and that information on the background and demographics of the experts in the Delphi survey should be collected. Some minor clarifications were made to the wording of the items.

Data collection procedure

At the commencement of the Delphi expert query, an invitation letter, information sheet with an explanation of the background and aim of the Delphi survey, together with the prototype tool survey form, were sent to the experts via email. These experts were those whom the researchers had approached while attending various conferences or whom they had identified from the literature and contacted by email. All of them agreed to take part in this Delphi query.

The panelists were asked to rate the relevance of each item on a 4-point Likert scale. The experts were also given the opportunity to suggest additional domains and items that might not have been included in the tool, and to give comments on the tool at the end of the survey form. As it has been suggested that ten to fourteen days is a sufficient interval between rounds of assessment for an expert query [35], the experts were given 2 weeks to return their ratings and comments. A reminder email was sent to those in the panel who had not given their feedback after 2 weeks. If there was still no response within the next 2 weeks (4 weeks in total), it was concluded that the expert was not available or no longer interested in taking part in the study, and no further attempts were made to contact that person.

There was a two-week interval between the rounds of the Delphi survey. During this period, the feedback from the experts was summarized, scrutinized, and studied to refine the prototype tool. This feedback was also sent to the panel of experts in the next round. Only those who took part in the first round were invited to take part in the subsequent round(s) of the Delphi survey. In those subsequent round(s), the panelists were asked to rate the items using the same criteria for assessment described earlier.

The number of rounds depended on the level of consensus reached [19, 36] and the amount of time available [24]. Recent evidence has shown that two to three rounds are sufficient in a modified Delphi study [37,38,39,40]. The rounds of surveys in this study ceased when a consensus was reached on all items, as indicated by at least 70% of the experts agreeing on each item [41,42,43].

Statistical analysis

After the completion of each round of the Delphi survey, the data were entered into SPSS (Statistical Package for the Social Sciences) version 25.0 for Windows (IBM Corp., Armonk, NY, USA) for statistical analysis. Consensus is one of the most contentious components of the Delphi method [36]. The median and interquartile range, as well as the level of agreement, were calculated to evaluate the consensus for each item in this Delphi research [44]. A consensus was considered to have been reached on the inclusion of an item in the disaster resilience scale if the median of the item was at least 3.25 on the 4-point scale [17], the interquartile range was less than 1 [45, 46], and the level of agreement was at least 70% [19]. Kendall's coefficient of concordance (W) was used to evaluate the agreement among the panel of experts [47]. A two-tailed p-value of < 0.05 was considered statistically significant.
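To illustrate how these consensus statistics are obtained, the sketch below computes the per-item median, interquartile range, and level of agreement, together with Kendall's W, from an experts-by-items rating matrix. The analyses in this study were run in SPSS; the Python/NumPy/SciPy code, the function names, and the example data here are illustrative assumptions only, and the exact interquartile-range cut-off is flagged as an assumption in the comments.

    import numpy as np
    from scipy import stats

    def item_consensus(ratings):
        """Per-item consensus statistics for an (experts x items) matrix of
        4-point relevance ratings: median, interquartile range (IQR), and the
        level of agreement (share of experts giving a rating of 3 or 4)."""
        medians = np.median(ratings, axis=0)
        iqrs = stats.iqr(ratings, axis=0)
        agreement = (ratings >= 3).mean(axis=0) * 100
        # Consensus rule described above; whether the IQR cut-off is strictly
        # "< 1" or "<= 1" is an assumption made for this illustration.
        consensus = (medians >= 3.25) & (iqrs <= 1) & (agreement >= 70)
        return medians, iqrs, agreement, consensus

    def kendalls_w(ratings):
        """Kendall's coefficient of concordance W (no correction for ties)."""
        m, n = ratings.shape                     # m experts, n items
        ranks = stats.rankdata(ratings, axis=1)  # rank items within each expert
        rank_sums = ranks.sum(axis=0)
        s = ((rank_sums - rank_sums.mean()) ** 2).sum()
        return 12 * s / (m ** 2 * (n ** 3 - n))

    # Hypothetical example: 22 experts rating 66 items on the 4-point scale.
    rng = np.random.default_rng(1)
    demo = rng.integers(1, 5, size=(22, 66))
    medians, iqrs, agreement, reached = item_consensus(demo)
    print(f"Kendall's W = {kendalls_w(demo):.2f}; items reaching consensus: {reached.sum()}")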

The content validity was calculated using the content validity index (CVI) [33]. Content validity was computed for each item (I-CVI) as well as for the overall scale (S-CVI), including the universal agreement (S-CVI/UA) and the average (S-CVI/Ave). In this study, both the I-CVI and the S-CVI were calculated, and the acceptable values for the I-CVI, S-CVI/UA, and S-CVI/Ave were set at ≥0.78, ≥0.80, and ≥0.90, respectively [48].
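Following the standard definitions cited above [33, 48], the I-CVI of an item is the proportion of experts rating it 3 or 4, the S-CVI/UA is the proportion of items rated 3 or 4 by every expert, and the S-CVI/Ave is the mean of the item-level I-CVIs. The following minimal sketch, again written in Python purely for illustration (the study used SPSS), shows these calculations; the function name and example data are assumptions.

    import numpy as np

    def content_validity(ratings):
        """I-CVI, S-CVI/UA, and S-CVI/Ave for an (experts x items) matrix of
        4-point relevance ratings (1 = not relevant ... 4 = highly relevant)."""
        relevant = ratings >= 3            # a rating of 3 or 4 counts as "relevant"
        i_cvi = relevant.mean(axis=0)      # proportion of experts rating each item relevant
        s_cvi_ua = (i_cvi == 1.0).mean()   # proportion of items rated relevant by ALL experts
        s_cvi_ave = i_cvi.mean()           # average of the item-level I-CVIs
        return i_cvi, s_cvi_ua, s_cvi_ave

    # Hypothetical example: 21 experts rating the 27 retained items.
    rng = np.random.default_rng(2)
    demo = rng.integers(2, 5, size=(21, 27))
    i_cvi, s_cvi_ua, s_cvi_ave = content_validity(demo)
    print(f"I-CVI range: {i_cvi.min():.2f}-{i_cvi.max():.2f}; "
          f"S-CVI/UA = {s_cvi_ua:.2f}; S-CVI/Ave = {s_cvi_ave:.2f}")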

Ethical considerations

Ethical approval for this study was obtained from the Human Research Ethics Committee of the School of Nursing, the Hong Kong Polytechnic University (HSEARS20190102004), and the West China Hospital, Sichuan University (2019#65). The experts were informed that their participation in this Delphi study was voluntary. Experts who returned their ratings of the tool were considered to have given their implied consent to participate in this study. In our study, only the researchers knew the names of the experts, and no individuals are identified in the report.

Results

This Delphi survey took place from 4th February to 20th April 2019. A consensus on the items was achieved after two rounds of the survey. A flow diagram of the Delphi process is given in Fig. 1.

Fig. 1 Flow diagram of panel selection and Delphi process

Before commencing the first round of the Delphi survey, a total of 38 experts were invited to take part in this modified Delphi study. Twenty-eight of them responded, of whom 24 had the willingness and time to become involved in this study. Emails were then sent to these 24 experts. A total of 22 of them gave feedback on the prototype tool, although a few required email reminders before doing so (for a response rate of 91.7%). In the second round, emails were sent to these 22 experts. In the end, a total of 21 experts completed the Delphi survey (for a response rate of 95.5%). The demographic characteristics of the panel experts are presented in Table 1.

Table 1 Characteristics of the experts in the panel

Table 2 shows the median, interquartile range, and level of agreement of all items in the prototype tool in the first round of the Delphi study. Regarding the 66 items in the first draft of the prototype tool, a total of 81 comments were received from experts. The researchers held a meeting to discuss these comments. As a result of the discussion, a total of 17 items were accepted and their wording was revised as suggested, 25 items were merged into 11 items, and 3 items were added. Another 19 items were regarded as irrelevant by 8 experts or did not meet the criteria for a consensus and were deleted as suggested. The Kendall’s coefficient of concordance (W) of the first round of the Delphi survey was calculated to be 0.21 (P < 0.01). The prototype tool was reduced from 66 to 36 items after the first round of the survey.

Table 2 The median, interquartile range, and the level of agreement of items in the first-round query

Table 3 shows the median, interquartile range, and level of agreement of all items in the disaster resilience tool following the second round. Another meeting was held among the researchers to discuss the comments and suggestions that were received. A total of 8 items were deleted because a consensus was not reached on them. The wording of 12 items was revised for greater precision and clarity, 4 items were merged into 2 items due to overlap, and 1 item was added after discussion and approval by all of the researchers. Kendall's coefficient of concordance (W) in the second round was 0.33 (P < 0.01). The final version of the 27-item tool for measuring the disaster resilience of healthcare rescue workers was thus established (Table 4).

Table 3 The median, interquartile range, and the level of agreement of items in the second-round query
Table 4 The final version of the disaster resilience tool for healthcare rescuers after a two-round Delphi survey

After two rounds of the modified Delphi survey, the I-CVI for the disaster resilience tool for healthcare rescuers ranged from 0.85 to 1.0, and the S-CVI/UA and S-CVI/Ave were 0.69 and 0.97, respectively. A consensus was reached on all items of the disaster resilience tool; therefore, the Delphi expert survey was concluded after two rounds.

Discussion

To the best of our knowledge, this is the first study to validate a tool for measuring the disaster resilience of healthcare rescuers through the use of a Delphi survey to gauge the views of experts in the field of disaster work and research. After two rounds of a web-based modified Delphi survey, a 27-item tool for screening the disaster resilience of rescuers was established (Table 4). The outcome of this study, the measuring tool, can be used as a reference for recruiting and identifying disaster rescue workers who have the characteristics of disaster resilience, or as a tool to evaluate the effectiveness of resilience training programs for healthcare disaster rescue workers [49]. The modified approach adopted in this Delphi survey is considered superior to the traditional approach because it is effective and less time-consuming [50, 51].

Having an “expert panel” is central to the Delphi technique, although there are no standard criteria for determining expertise [52]. In the current study, the panel of experts comprised people from seven countries/cities and from diverse professions, including university academics, physicians, and nurses, working in the fields of disaster nursing, disaster medicine, disaster education, disaster management, and disaster research. A panel consisting of experts from different geographical locations [25] and areas of professional expertise [27, 53] will produce better results than a panel composed of those from the same field [54].

Among the experts in our panel, some had taken part in national and/or international disaster rescue work, and the panel as a whole reflected the full range of stakeholders with a common interest [55]. These experts can also be regarded as “consumers” with lived experience of disaster rescue [23]. Therefore, the measuring tool from this Delphi study can be applied across countries by disaster practitioners, educators, researchers, and management personnel.

Through their active and timely responses, the expert panel in this study showed strong motivation and interest in taking part in the Delphi survey. Although there is no strict rule for what is considered an acceptable response rate for Delphi studies, a response rate of 70% has been suggested as necessary for each round [19]. The response rates of the two rounds of Delphi surveys in this study were higher than 90%, and a great number of constructive suggestions and comments were received, indicating the experts’ considerable enthusiasm and interest in this topic. This may be related to the fact that most of the experts considered it important to have such a measuring tool, and that they were approached at disaster-related international conferences and invited in person. The short timeframe between the two rounds of surveys (2 to 3 weeks) also served to keep the subject fresh in their minds and prevent fatigue. This also helped to enhance the content validity of this modified Delphi study, ultimately strengthening the validity of the results [24].

The content validity of the tool was good. The I-CVI of the included items varied from 0.85 to 1.00, which is higher than the recommended level of 0.78, suggesting that the content validity of each item of the disaster resilience tool is excellent [48]. The S-CVI/UA was 0.69, which did not reach the acceptable value of 0.8. This can be explained by the relatively large number of experts involved in the calculation of the S-CVI/UA, which increases the likelihood of disagreement on at least one rating [48]. The S-CVI/Ave was as high as 0.97, indicating that the content validity of the whole scale is excellent. Thus, the content validity of this disaster resilience tool can be regarded as excellent, although some of the items may need to be slightly revised based on the comments or suggestions of the expert panel.

After two rounds of surveys, the experts reached a consensus on all of the items that were finally included in the disaster resilience tool. The medians of all of the included items ranged from 3.50 to 4.00, the interquartile ranges from 0.00 to 1.00, and all items achieved ≥85% agreement, indicating good consensus [41,42,43]. Kendall's coefficient of concordance (W), used to assess the agreement among several expert evaluators [56], was 0.207 in the first round and 0.33 in the second round, with P < 0.01, indicating a statistically significant level of agreement among the experts in the panel.

As with any research, there are some limitations to this study that need to be acknowledged. First, to prevent peer influence in a Delphi study, the members of the panel of experts should not know about one another [57]. However, the members of the expert panel in this study were mainly approached during international conferences. Thus, it was inevitable that some of these experts would know about each other, which means that absolute anonymity could not be achieved in this study. Nevertheless, all of the experts rated the items of the tool independently at their own locations, so the ratings of each item remained anonymous. Second, because the experts lived in different countries/cities, no face-to-face meetings were held among them in the process of carrying out this modified Delphi technique. Finally, the opinions of the panel of experts in this study may not be representative of all experts within the field of disaster studies, as experts from other countries that are frequently affected by disasters, such as India and Indonesia, were not involved.

In spite of its limitations, this study provides significant information for disaster management. As a next step, the developed disaster resilience tool is to be validated for its psychometric properties, including reliability and validity, in a cross-sectional study of healthcare disaster rescue workers. Researchers, managers, and policy makers will then have a validated tool to use in recruiting or assessing disaster resilience among healthcare rescuers.

The validation process would include the following: (1) translating the tool into languages other than English if it is to be validated in countries where English is not the main language; (2) conducting a pilot test among a sample of disaster healthcare rescuers to assess the clarity and pertinence of the items in the language of the country where the tool is being tested; and (3) conducting a cross-sectional survey among a large sample of disaster healthcare rescuers to test the reliability and construct validity of the tool.

Conclusion

This study has established a tool for assessing the disaster resilience of healthcare rescuers, using a modified Delphi technique. The tool is a scale made up of a total of eight domains and 27 items. The panel of experts reached a consensus on all of the items in this scale, and the items and the overall scale were found to have excellent content validity. A study to establish the psychometric properties of this scale is needed in the next step before it can be used as a tool in the recruitment and management of disaster healthcare rescue workers.

Availability of data and materials

The data are held by the first author; if needed, please contact xiaorong.mao@connect.polyu.hk.

Abbreviations

APEDNN:

Asia Pacific Emergency and Disaster Nursing Network

CD-RISC:

Connor-Davidson Resilience Scale

CVI:

Content validity index

PTSD:

Posttraumatic stress disorder

SFDRR:

Sendai Framework for Disaster Risk Reduction

SPSS:

Statistical Package for the Social Sciences

USA:

United States of America.

WSDN:

World Society of Disaster Nursing

References

  1. Bonanno GA, Diminich ED. Annual research review: positive adjustment to adversity–trajectories of minimal–impact resilience and emergent resilience. J Child Psychol Psychiatry. 2013;54(4):378–401.
  2. Jackson J, Vandall-Walker V, Vanderspank-Wright B, Wishart P, Moore SL. Burnout and resilience in critical care nurses: a grounded theory of managing exposure. Intensive Crit Care Nurs. 2018;48:28–35.
  3. Moon Y, Shin SY. Moderating effects of resilience on the relationship between emotional labor and burnout in care workers. J Gerontol Nurs. 2018;44(10):30–9.
  4. Kašpárková L, Vaculík M, Procházka J, Schaufeli WB. Why resilient workers perform better: the roles of job satisfaction and work engagement. J Work Behav Health. 2018;33(1):43–62.
  5. Mao XR, Fung OWM, Hu XY, Loke AY. Psychological impacts of disaster on rescue workers: a review of the literature. Int J Disaster Risk Reduct. 2018;27:602–17.
  6. McCain RS, McKinley N, Dempster M, Campbell WJ, Kirk SJ. A study of the relationship between resilience, burnout and coping strategies in doctors. Postgrad Med J. 2018;94(1107):43–7.
  7. CRED, UNISDR. 2018 review of disaster events. 2019. Available from: https://www.cred.be/publications.
  8. Kang P, Lv Y, Hao L, Tang B, Liu Z, Liu X, et al. Psychological consequences and quality of life among medical rescuers who responded to the 2010 Yushu earthquake: a neglected problem. Psychiatry Res. 2015;230(2):517–23.
  9. Zhen Y, Huang ZQ, Jin J, Deng XY, Zhang LP, Wang JG. Posttraumatic stress disorder of Red Cross nurses in the aftermath of the 2008 Wenchuan China earthquake. Arch Psychiatr Nurs. 2012;26(1):63–70.
  10. Connor KM, Davidson JR. Development of a new resilience scale: the Connor-Davidson Resilience Scale (CD-RISC). Depress Anxiety. 2003;18(2):76–82.
  11. Southwick SM, Charney DS. The science of resilience: implications for the prevention and treatment of depression. Science. 2012;338(6103):79–82.
  12. Shepherd D, McBride D, Lovelock K. First responder well-being following the 2011 Canterbury earthquake. Disaster Prev Manag. 2017;26(3):286–97.
  13. McCanlies EC, Mnatsakanova A, Andrew ME, Burchfiel CM, Violanti JM. Positive psychological factors are associated with lower PTSD symptoms among police officers: post Hurricane Katrina. Stress Health. 2014;30(5):405–15.
  14. Chang K, Taormina RJ. Reduced secondary trauma among Chinese earthquake rescuers: a test of correlates and life indicators. J Loss Trauma. 2011;16(6):542–62.
  15. Windle G. What is resilience? A review and concept analysis. Rev Clin Gerontol. 2011;21(2):152–69.
  16. Mao X, Loke AY, Fung OWM, Hu X. What it takes to be resilient: the views of disaster healthcare rescuers. Int J Disaster Risk Reduct. 2019;36:1–8.
  17. Hsu C-C, Sandford BA. Delphi technique. In: Encyclopedia of research design; 2010. p. 344–7.
  18. Goodman CM. The Delphi technique: a critique. J Adv Nurs. 1987;12(6):729–34.
  19. Keeney S, McKenna H, Hasson F. The Delphi technique in nursing and health research. Chichester: Wiley-Blackwell; 2011.
  20. Custer RL, Scarcella JA, Stewart BR. The modified Delphi technique: a rotational modification. J Vocat Tech Educ. 1999;15(2):50–8.
  21. Schmidt RC. Managing Delphi surveys using nonparametric statistical techniques. Decis Sci. 1997;28(3):763–74.
  22. Keeney S, Hasson F, McKenna H. Consulting the oracle: ten lessons from using the Delphi technique in nursing research. J Adv Nurs. 2006;53(2):205–12.
  23. Williams KE, Sansoni J, Morris D, Thompson C. A Delphi study to develop indicators of cancer patient experience for quality improvement. Support Care Cancer. 2018;26(1):129–38.
  24. Hasson F, Keeney S, McKenna H. Research guidelines for the Delphi survey technique. J Adv Nurs. 2000;32(4):1008–15.
  25. Biondo PD, Nekolaichuk CL, Stiles C, Fainsinger R, Hagen NA. Applying the Delphi process to palliative care tool development: lessons learned. Support Care Cancer. 2008;16(8):935–42.
  26. Skulmoski GJ, Hartman FT, Krahn J. The Delphi method for graduate research. J Inf Technol Educ Res. 2007;6:1–21.
  27. Huang HC, Lin WC, Lin JD. Development of a fall-risk checklist using the Delphi technique. J Clin Nurs. 2008;17(17):2275–83.
  28. Evans C. The use of consensus methods and expert panels in pharmacoeconomic studies. Pharmacoeconomics. 1997;12(2):121–9.
  29. Keeney S, Hasson F, McKenna HP. A critical review of the Delphi technique as a research methodology for nursing. Int J Nurs Stud. 2001;38(2):195–200.
  30. Moreno-Casbas T, Martín-Arribas C, Orts-Cortés I, Comet-Cortés P. Identification of priorities for nursing research in Spain: a Delphi study. J Adv Nurs. 2001;35(6):857–63.
  31. Klimenko E, Julliard K. Communication between CAM and mainstream medicine: Delphi panel perspectives. Complement Ther Clin Pract. 2007;13(1):46–52.
  32. Jairath N, Weinstein J. The Delphi methodology (part one): a useful administrative approach. Can J Nurs Adm. 1994;7(3):29–42.
  33. Lynn MR. Determination and quantification of content validity. Nurs Res. 1986;35(6):382–5.
  34. Davis LL. Instrument review: getting the most from a panel of experts. Appl Nurs Res. 1992;5(4):194–7.
  35. McCain NL. A test of Cohen's developmental model for professional socialization with baccalaureate nursing students. J Nurs Educ. 1985;24(5):180–6.
  36. von der Gracht HA. Consensus measurement in Delphi studies: review and implications for future quality assurance. Technol Forecast Soc Chang. 2012;79(8):1525–36.
  37. Alshehri SA, Rezgui Y, Li H. Delphi-based consensus study into a framework of community resilience to disaster. Nat Hazards. 2015;75(3):2221–45.
  38. Joling KJ, Windle G, Dröes R-M, Huisman M, Hertogh CMM, Woods RT. What are the essential features of resilience for informal caregivers of people living with dementia? A Delphi consensus examination. Aging Ment Health. 2017;21(5):509–17.
  39. Thompson SR, Dobbins S. The applicability of resilience training to the mitigation of trauma-related mental illness in military personnel. J Am Psychiatr Nurses Assoc. 2018;24(1):23–34.
  40. Zhong S, Clark M, Hou XY, Zang YL, FitzGerald G. Development of key indicators of hospital resilience: a modified Delphi study. J Health Serv Res Policy. 2015;20(2):74–82.
  41. Hampshaw S, Cooke J, Mott L. What is a research derived actionable tool, and what factors should be considered in their development? A Delphi study. BMC Health Serv Res. 2018;18.
  42. Lis R, Sakata V, Lien O. How to choose? Using the Delphi method to develop consensus triggers and indicators for disaster response. Disaster Med Public Health Prep. 2017;11(4):467–72.
  43. Veziari Y, Kumar S, Leach M. The development of a survey instrument to measure the barriers to the conduct and application of research in complementary and alternative medicine: a Delphi study. BMC Complement Altern Med. 2018;18.
  44. Murphy MK, Black NA, Lamping DL, et al. Consensus development methods, and their use in clinical guideline development. Health Technol Assess. 1998;2(3):1–88.
  45. Raskin MS. The Delphi study in field instruction revisited: expert consensus on issues and research priorities. J Soc Work Educ. 1994;30(1):75–89.
  46. Rayens MK, Hahn EJ. Building consensus using the policy Delphi method. Policy Polit Nurs Pract. 2000;1(4):308–15.
  47. Chen L, Huang LH, Xing MY, Feng ZX, Shao LW, Zhang MY, et al. Using the Delphi method to develop nursing-sensitive quality indicators for the NICU. J Clin Nurs. 2017;26(3–4):502–13.
  48. Polit DF, Beck CT, Owen SV. Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Res Nurs Health. 2007;30(4):459–67.
  49. Tidball KG, Krasny ME. From risk to resilience: what role for community greening and civic ecology in cities? 2007. p. 149–64.
  50. Eubank BH, Mohtadi NG, Lafave MR, Wiley JP, Bois AJ, Boorman RS, et al. Using the modified Delphi method to establish clinical consensus for the diagnosis and treatment of patients with rotator cuff pathology. BMC Med Res Methodol. 2016;16(1):56.
  51. Okoli C, Pawlowski SD. The Delphi method as a research tool: an example, design considerations and applications. Inf Manag. 2004;42(1):15–29.
  52. Shaw KL, Brook L, Cuddeford L, Fitzmaurice N, Thomas C, Thompson A, et al. Prognostic indicators for children and young people at the end of life: a Delphi study. Palliat Med. 2014;28(6):501–12.
  53. Keeney S, Hasson F, McKenna HP. The Delphi technique in nursing and health research. Chichester: Wiley-Blackwell; 2011.
  54. Bantel K. Comprehensiveness of strategic planning: the importance of heterogeneity of a top team. Psychol Rep. 1993;73:35.
  55. Boulkedid R, Abdoul H, Loustau M, Sibony O, Alberti C. Using and reporting the Delphi method for selecting healthcare quality indicators: a systematic review. PLoS One. 2011;6(6).
  56. Gearhart A, Booth DT, Sedivec K, Schauer C. Use of Kendall's coefficient of concordance to assess agreement among observers of very high resolution imagery. Geocarto Int. 2013;28(6):517–26.
  57. Petrosyan Y, Barnsley JM, Kuluski K, Liu B, Wodchis WP. Quality indicators for ambulatory care for older adults with diabetes and comorbid conditions: a Delphi study. PLoS One. 2018;13(12).


Acknowledgements

Not applicable.

Funding

None declared.

Author information


Contributions

Conceived and designed this study: XM, AY, XH. Collected the data: XM. Supervised the process, including quality control and statistical advice: AY, XH. Analyzed the data: XM. Drafted the manuscript: XM, AY, XH. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Alice Yuen LOKE or Xiuying HU.

Ethics declarations

Ethics approval and consent to participate

As stated in the Methods section, participation of all the experts in this study was voluntary and anonymous. All data collected from the experts were used only for research purposes. Verbal or written informed consent was obtained, and the participants could withdraw from this study at any time.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

MAO, X., LOKE, A.Y. & HU, X. Developing a tool for measuring the disaster resilience of healthcare rescuers: a modified Delphi study. Scand J Trauma Resusc Emerg Med 28, 4 (2020). https://doi.org/10.1186/s13049-020-0700-9

