Open Access

Developing quality indicators for physician-staffed emergency medical services: a consensus process

Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine 2017, 25:14

DOI: 10.1186/s13049-017-0362-4

Received: 16 October 2016

Accepted: 10 February 2017

Published: 15 February 2017

Abstract

Background

There is increasing interest in quality measurement in health care services, including pre-hospital emergency medical services (EMS). However, attempts to measure the quality of physician-staffed EMS (P-EMS) are scarce. The aim of this study was to develop a set of quality indicators for international P-EMS to enable quality improvement initiatives.

Methods

A four-step modified nominal group technique process (expert panel method) was used.

Results

The expert panel reached consensus on 26 quality indicators for P-EMS. Fifteen quality indicators measure quality of P-EMS responses (response-specific quality indicators), whereas eleven quality indicators measure quality of P-EMS system structures (system-specific quality indicators).

Discussion

When measuring quality, the six quality dimensions defined by the Institute of Medicine should be appraised. We argue that this multidimensional approach to quality measurement is particularly reasonable for services with a highly heterogeneous patient population and complex operational contexts, like P-EMS. The quality indicators in this study were developed to represent a broad and comprehensive approach to quality measurement of P-EMS.

Conclusions

The expert panel successfully developed a set of quality indicators for international P-EMS. The quality indicators should be prospectively tested for feasibility, validity and reliability in clinical datasets. The quality indicators should then allow for adjusted quality measurement across different P-EMS systems.

Keywords

Quality indicators; Physician-staffed emergency medical services; Modified nominal group technique

Background

The European Resuscitation Council has identified five critical conditions that require immediate pre-hospital management: cardiac arrest, severe respiratory failure, severe trauma, chest pain and stroke. Four of these conditions are among the leading causes of death in the European Union [1]. An observational study on Scandinavian physician-staffed emergency medical services (P-EMS) observed a pre-hospital incidence of severe illness or injury of 25–30 per 10 000 person-years [2]. Many of these conditions benefit from interventions that rapidly correct deranged physiology and improve tissue oxygen delivery [3]. Services delivering pre-hospital critical care therefore remain a critical link in the chain of survival for several frequent and life-threatening conditions.

Pre-hospital emergency care is primarily delivered by paramedics or nurses in ambulance EMS. In addition, many health care systems employ P-EMS to respond to selected patients [4–6]. These P-EMS normally use rapid response cars or helicopters depending on distance to the scene and receiving hospital, weather, and the characteristics of each assignment [7]. However, although P-EMS is widely established in many countries, little is known about the quality delivered by P-EMS.

The importance of quality measurement in health care is widely recognized [8–11]. Moreover, defining quality indicators (QI) for P-EMS and EMS has been identified as a high-priority research area [12, 13]. QIs are instrumental in helping clinicians, organizations, health care managers and societies achieve improvements in health care quality [14]. Further, QIs should integrate the best research evidence with clinical expertise and patient values [15] and allow measurement of health care quality by creating a quantitative basis that indicates performance.

The literature on QIs in pre-hospital critical care is scarce [13, 16], and there is no international agreement on a conceptual framework or choice of QIs for P-EMS. Appropriate QIs are needed to identify both high-quality care and areas where there is room for improvement. The current study describes the development of a comprehensive set of QIs for P-EMS and is a necessary initial step towards quality measurement in this field of health care.

Methods

Conceptual framework

For the purpose of this study, we used the framework described by Donabedian, which groups QIs into three broad categories: structure, process and outcome of health care [17, 18]. Structure indicators describe the infrastructure of a health care system, such as competence of the staff, available equipment, deployment and response times. Process indicators evaluate the care provided to the patient, whereas outcome indicators address the change in the patient’s health status as a result of the provided care. No single type of QI gives a complete description of the quality of care; rather, each addresses a component of the care. Thus, different types of QIs should be combined when estimating the quality of a service [14].

To identify potential QIs, a widely used method is to combine a systematic review of the current literature with a formal process for obtaining expert opinions. In this study, we tasked an expert panel with developing QIs for P-EMS using the modified nominal group technique [19, 20]. We defined P-EMS as a dedicated unit staffed with physicians trained in emergency care exceeding the competency of a general practitioner on call [21]. The QIs should be feasible to collect during the pre-hospital time interval or in the emergency department at hand-over. Further, the QIs should as far as possible cover the six quality dimensions that define high-quality care, as stated by the Institute of Medicine [22] and endorsed by the World Health Organization [10]. The six quality dimensions are timeliness, safety, efficiency, equity, effectiveness, and patient-centeredness. An overview of the conceptual framework for this study, using structure, process and outcome indicators to address six established quality dimensions, is depicted in Fig. 1.
Fig. 1

Conceptual framework for multidimensional quality measurement in P-EMS

The experts

When developing QIs, the expert panel should consist of people who are considered experts in the relevant area and who have credibility with the target audience [19]. Clinical expertise is represented by physicians, scientific expertise by researchers and user expertise by patients. Accordingly, this study’s expert panel consisted of clinicians and researchers from different P-EMSs, as well as stakeholders representing other perspectives on P-EMS. More specifically, the 18 members of the expert panel comprised three general practitioners, two P-EMS medical directors, a director of a public health institute, a specialist in community medicine, a patient-organization leader and ten physicians working in P-EMS. All panel members were in different ways considered experts in P-EMS or in services collaborating with P-EMS, and practiced in Australia, Austria, Denmark, Finland, Norway, Scotland, the United Kingdom and the United States of America. The experts were recruited through PubMed and Google Scholar searches and via the professional network of the study group. Twenty-six experts were invited by e-mail or telephone; 18 accepted the invitation, two declined and two did not respond. Non-responders were reminded three times by e-mail and three times by telephone.

The modified nominal group process

In our study, the expert panel developed the QIs through a four-step modified nominal group technique. Stages 1, 2 and 4 were conducted by e-mail. In stage 3, the expert panel gathered for a 2-day consensus meeting in Oslo, Norway.

Stage 1. The members of the expert panel were asked to propose QIs for P-EMS according to the following predefined instructions: a total of 3–10 QIs should be proposed for each of the three categories of QIs (structure, process and outcome). A fourth category (“Other indicators”) was available for QIs difficult to fit into the structure-process-outcome system. All proposed QIs should be obtainable during the pre-hospital time interval. The experts were asked to consider both the evidence base and the feasibility of data collection when proposing QIs. However, it was not required that the proposed QIs could be extracted from existing P-EMS databases.

The panel members returned their proposals to the project group by e-mail in a predesigned Excel spreadsheet (Microsoft Office 2013, Microsoft Inc., USA). QIs with identical meaning were merged; no proposed QIs were deleted. The QIs within each category (structure, process and outcome) were then ranked according to the number of experts who had included each QI in their proposal.

Stage 2. The experts were asked to use the revised spreadsheet to rank the ten most important QIs in each of the three categories (structure, process and outcome). In each category, the quality indicator ranked first was given a point value of 10, the second 9, the third 8, and so forth, down to one point for the tenth place. The point values from all panel members were added, and quality indicators with no ranking were removed from the list. The list of remaining quality indicators, prioritized by achieved point value, formed the basis for the consensus meeting.
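The stage-2 scoring scheme amounts to a simple rank-to-points aggregation. The sketch below illustrates it in Python; the function name and the sample rankings are our own hypothetical constructions, not data from the study:

```python
# Hypothetical sketch of the stage-2 scoring: each expert ranks up to ten
# QIs per category; rank 1 earns 10 points, rank 2 earns 9, ..., rank 10 earns 1.
from collections import defaultdict


def score_rankings(rankings):
    """rankings: one ranked list of QI names per expert, most important first."""
    points = defaultdict(int)
    for ranked in rankings:
        for place, qi in enumerate(ranked[:10]):  # only the top ten places count
            points[qi] += 10 - place              # 10 points for 1st, ..., 1 for 10th
    # QIs ranked by no one obtain 0 points and are removed, as in stage 2
    return sorted(((qi, pts) for qi, pts in points.items() if pts > 0),
                  key=lambda item: item[1], reverse=True)


# Two experts ranking (invented) process QIs:
expert_a = ["response time", "adverse events", "debriefing"]
expert_b = ["adverse events", "response time"]
print(score_rankings([expert_a, expert_b]))
```

Summing per-expert points this way makes the prioritized list depend only on how often and how highly a QI was ranked, not on who ranked it, which matches the anonymity of stages 1 and 2.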

Stage 3. The expert panel gathered for a 2-day consensus meeting in Oslo, Norway. A moderator (MR) led the experts through discussions on the QIs in the spreadsheet developed in stage 2. The experts decided which QIs should be included in the final set. Further, preliminary definitions and limitations were defined. All debates and discussions were plenary.

Stage 4. Based on the results from the consensus meeting, the project group prepared a document with the selected QIs, including definitions. This document was submitted to the panel members for comments. At this stage, no additional QIs were accepted. However, minor changes pertaining to definitions were allowed.

Consensus was defined as agreement on the proposed QIs among the experts attending the meeting.

Results

The 18 experts proposed and ranked QIs in stages 1 and 2 (one expert did not submit rankings in stage 2). In stage 1, 358 QIs were proposed by the expert panel. After merging, 179 QIs entered stage 2. In stage 2, 45 QIs obtained 0 points and were excluded, leaving 134 indicators to be discussed at the consensus meeting. Thirteen experts attended the consensus meeting. During the meeting, the expert panel recommended that the QIs from stage 2 be classified into two categories for clarity: response-specific QIs and system-specific QIs. The former comprise data from the pre-hospital time interval, measure quality at the response level, and should be feasible for the P-EMS physician to collect on any P-EMS response. The latter are administrative data describing fixed system characteristics and should be registered once a year by services using the set of quality indicators continuously or for study purposes. The expert panel argued that the combined information from response- and system-specific QIs allows for a more thorough quality measurement than relying exclusively on response-specific QIs.

Consensus was reached on 15 response-specific and 11 system-specific QIs (Tables 1 and 2). More specific definitions for each QI are given in the explanation and elaboration document (Additional file 1). The expert panel allowed the project group to finalize the definitions of the indicators and propose them to all 18 experts in stage 4, where the final result was agreed upon. The QIs were each allocated to one of the six quality dimensions as defined by the Institute of Medicine. All six quality dimensions were covered by the QIs, and structure, process and outcome indicators were all represented. An overview of the distribution of the QIs is presented in Table 3.
Table 1 Response-specific quality indicators for physician-staffed emergency medical services

#  | Quality indicator | Type of quality indicator | Quality dimension
1  | Was the P-EMS unit able to respond immediately to the actual response? | Structure | Timeliness
2  | What is the time interval from the dispatch center receives the alarm call until the P-EMS unit arrives at the patient? | Structure | Timeliness
3  | What is the time interval from the P-EMS unit arrives at the patient until transportation of the patient is initiated? | Process | Timeliness
4  | What is the time interval from the P-EMS unit received the alarm call until the patient was delivered at the preferred destination? | Process | Timeliness
5  | Did the patient arrive at hospital alive? | Outcome | Timeliness
6  | Was the P-EMS response debriefed? | Process | Safety
7  | Did you experience any adverse events during the P-EMS response? | Process | Safety
8  | Are all defined key variables measured and documented in the patient chart? | Process | Efficiency
9  | Did the service have a guideline for the medical problem encountered in the response? | Structure | Equity
10 | Was a physician and/or a paramedic from P-EMS involved in deciding whether the P-EMS unit should be dispatched to the particular job? | Process | Equity
11 | Without the assistance of the P-EMS unit: do you consider that the level of competence on scene was sufficient to give the patient appropriate care? | Process | Equity
12 | Did P-EMS provide advanced treatment in the actual response? | Process | Effectiveness
13 | Did the logistical contribution by P-EMS give the patient a significantly better service than the existing alternative? | Process | Effectiveness
14 | Was the patient enrolled in a scientific study involving the pre-hospital care? | Structure | Effectiveness
15 | Did you ensure that the relatives’ needs were addressed, either by P-EMS or by collaborating services? | Process | Patient-centeredness

Table 2 System-specific quality indicators for physician-staffed emergency medical services

#  | Quality indicator | Type of quality indicator | Quality dimension
16 | Is the dispatch center staffed 24/7 by a specially trained pre-hospital physician? | Structure | Effectiveness
17 | What is the number of P-EMS units per 100 000 inhabitants in the service area? | Structure | Equity
18 | What is the number of P-EMS units per km2 in the area covered by the service? | Structure | Equity
19 | Does the service regularly perform interfacility transports coordinated by a dispatch centre? | Structure | Effectiveness
20 | What level of regular in-hospital service do the P-EMS doctors practice in addition to their pre-hospital work? | Structure | Effectiveness
21 | Proportion of P-EMS doctors with a completed speciality in (1) anesthesiology, (2) emergency medicine or (3) other specialities | Structure | Effectiveness
22 | Proportion of P-EMS doctors who have attended and passed formalized training in major incident management | Structure | Efficiency
23 | Proportion of P-EMS doctors’ assistants with the following qualification: paramedic or nurse with supplemental regular training in assisting during induction of general anesthesia and/or formal education in anesthesia or intensive care | Structure | Safety
24 | Does the P-EMS service collect data pertaining to patient satisfaction? | Structure | Patient-centeredness
25 | What is the number of documented complaints from patients, relatives or receiving hospitals per total number of P-EMS events (ratio)? | Outcome | Patient-centeredness
26 | Does a system exist for registration and review of adverse events, critical incidents and educational events in the service? | Structure | Safety

Table 3 Classification of quality indicators from the consensus process

          | Timeliness | Safety | Efficiency | Equity | Effectiveness | Patient-centeredness | Number | Percent
Structure | 0          | 2      | 1          | 2      | 6             | 1                    | 12     | 46.2
Process   | 4          | 1      | 1          | 2      | 2             | 1                    | 11     | 42.3
Outcome   | 1          | 1      | 0          | 0      | 0             | 1                    | 3      | 11.5
n         | 5          | 4      | 2          | 4      | 8             | 3                    | 26     |
%         | 19.2       | 15.4   | 7.7        | 15.4   | 30.8          | 11.5                 |        |
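The marginal percentages in Table 3 follow directly from dividing each count by the 26 indicators. As a quick arithmetic check (our own illustration, not part of the study):

```python
# Marginal totals from Table 3: QIs per Donabedian indicator type and
# per Institute of Medicine quality dimension (26 indicators in total).
counts_by_type = {"Structure": 12, "Process": 11, "Outcome": 3}
counts_by_dimension = {"Timeliness": 5, "Safety": 4, "Efficiency": 2,
                       "Equity": 4, "Effectiveness": 8, "Patient-centeredness": 3}

total = sum(counts_by_type.values())
assert total == sum(counts_by_dimension.values()) == 26  # both margins agree

# Reproduce the Percent column and the % row to one decimal place:
for name, n in {**counts_by_type, **counts_by_dimension}.items():
    print(f"{name}: {n}/{total} = {100 * n / total:.1f}%")
```

Running this reproduces the table’s marginals, e.g. Structure 12/26 = 46.2% and Effectiveness 8/26 = 30.8%.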

Discussion

This paper presents a set of potential QIs for quality measurement of P-EMS. Using a modified nominal group technique, an international expert panel achieved consensus on these QIs, which cover six quality dimensions and include structure, process and outcome indicators.

Quality measurement of pre-hospital services has been identified as a high-priority research area and as pivotal to achieving improvement in care [12, 13, 23, 24]. However, identifying valid quality indicators that are feasible to collect in the operational context of pre-hospital services has been a challenge [25]. We deliberately asked the experts to propose quality indicators themselves rather than simply selecting from a pre-defined list. The rationale was to make the process as open as possible in order to obtain a broad selection of proposals. The multidisciplinary composition of the expert panel was partly intended to facilitate this broad approach.

A premise of this study is that the principles for quality measurement in health care also apply to P-EMS. P-EMS is the practice of medicine outside hospitals, and we find it reasonable to accept this premise. The six core characteristics of quality depicted in Fig. 1 were defined by the Institute of Medicine, which named them dimensions of quality [22]. Each of these is distinct, and none is defined as more important than the others. When measuring quality, all six quality dimensions should be appraised. We argue that this multidimensional approach to quality measurement is particularly reasonable for services with a highly heterogeneous patient population, like P-EMS. Patients cared for by P-EMS differ widely: neonates vs. elderly patients, medical vs. surgical diagnoses, patients rescued from open water vs. intensive care unit transfers [2, 5]. What is considered high-quality care for each patient will be context-specific. Given this complexity of cases, treatments and operational contexts, we argue that adequate quality measurement of P-EMS must be multidimensional.

Quality dimensions

Timely care is about reducing needless and potentially harmful delays before the patient receives specialized care from the P-EMS. Traditionally, attempts at quality measurement of pre-hospital services have been limited to data on time variables, corresponding to the quality dimension “timeliness” [13, 26]. Studies of EMS have shown that response time affects outcome for only a small group of patients [27, 28]. Moreover, time variables describe the logistics, not the provided care. The response time of P-EMS is measured in QI 2, “Time to arrival of P-EMS”, and is indeed important for some time-critical conditions such as cardiac arrest and major trauma [29]. However, the importance of short response times cannot be generalized to all emergency responses [30]. In selected situations, too much emphasis on timeliness is misleading with respect to what really represents quality for the patient. In the United Kingdom, paramedics criticized the use of a time-target structure measure (an eight-minute response time for 75% of category A, or emergency, calls) as the main performance indicator in EMS. They argued that this QI was “too simplistic and narrow” and that it could also increase risk for patients and ambulance crews [31]. An example may illustrate the limitation of time variables as the sole QIs in P-EMS: performing an ultrasound scan of the traumatized patient may slightly prolong the time on scene. However, the examination can result in changes in treatment or triage [32], making the extra time spent on scene well worth it.

The quality dimension “safety” focuses on safety issues related to P-EMS responses, whether for the patient, the EMS staff or others. These safety issues can be medical, technical or operational. P-EMS operates rapid response cars and helicopters, activities associated with operative risks for patients, bystanders and crew [33]. Additionally, P-EMS cares for severely injured or ill patients without access to the safety measures available in hospitals, e.g. senior assistance or access to the patient history. Moreover, the pre-hospital environment can be associated with hazards like extreme temperatures, traffic and difficult access requiring the application of rescue techniques [34].

The quality dimension “efficiency” is about avoiding medical waste, including wasted use of P-EMS personnel, equipment and energy. Advanced major incident management reduces over-triage and is one example of how to prevent waste of resources [35]. This issue is covered in QI 22, which measures the proportion of P-EMS doctors who have completed a major incident management program.

“Equity” is about ensuring that quality of care is provided equally regardless of the patient’s gender, ethnicity, geographic location and socioeconomic status. P-EMS contributes to equitable care by reducing transportation times (when using a helicopter) and by bringing hospital competencies to the pre-hospital environment. This role of P-EMS can also be defined as a governmental objective [36], an initiative to give people living in sparsely populated areas specialized care within due time. Thus, more equitable access to centralized medical treatments like neurosurgery and invasive cardiology can be provided. The expert panel argued that the involvement of a physician or a paramedic from P-EMS in the dispatch decision would secure the most correct use of P-EMS, thus contributing to equitable care. This is addressed in QI 10, “P-EMS involvement in dispatch decision”.

“Effectiveness” is about ensuring that the provided treatment is evidence-based. Care proven effective should be provided, thereby preventing undertreatment. Similarly, care proven ineffective should not be provided, thereby preventing overtreatment. There is some evidence that the use of physicians in EMS for selected patient groups improves outcomes or proxy outcomes such as physiological variables [1, 37]. However, the current documentation of the impact of P-EMS initiatives is controversial, and effectiveness QIs are therefore difficult to derive from the literature. The expert panel combined the existing evidence with the experience and considerations of all panel members. One of the resulting QIs, QI 12 “Advanced treatment”, addresses care considered indicated but not feasible without the competence of P-EMS. Note that withholding unethical or unnecessary treatment by the P-EMS physician was also defined as “advanced treatment” by the expert panel. Thus, critical decision making, as illustrated for pre-hospital advanced airway management by Rognås et al. [38], is recognized as a part of quality care.

“Patient-centeredness” is about ensuring that care is responsive to individual needs. Although most stakeholders and clinicians in P-EMS presumably put the patient at the center of care, the study group wanted to ensure that patients were represented in the expert panel. Therefore, a leading representative from a major patient organization was invited to join the panel. Developing quality indicators for this quality dimension in P-EMS is challenging, primarily because many of the patients cared for by the service are unconscious or at least not capable of expressing their needs in their usual manner. This can be due to the clinical condition itself, the stressful situation or pharmacological interventions. The needs of the patient’s family, however, can be expressed more easily. Moreover, it has been argued that the term “patient-centeredness” should be expanded to “patient- and family-centeredness” [39]. Patient- and family-centered care is based on the beneficial partnership between patient, family and health care workers, and it can be applied to patients of all ages and in any health care setting [39, 40]. As a surrogate for measuring the patient’s needs, the needs of the patient’s relatives could be addressed, as defined in QI 15, “Care for relatives”. This QI addresses the relatives’ needs, including the need for practical and emotional assistance.

Types of quality indicators

J. Mainz has reviewed the strengths of structure, process and outcome indicators [14]. Structure indicators are most useful when they predict variations in the processes or outcomes of care. Process indicators are particularly useful when coping with short time frames and low-volume providers, and when tools to adjust or stratify for patient-related factors are difficult to apply. Comparisons of process indicators are generally easier to interpret and more sensitive to small differences than comparisons of outcome data. Based on these characteristics, we consider process indicators particularly suitable for continuous quality measurement of P-EMS. Although necessary for information about a patient’s final outcome, long-term outcome indicators appear less feasible for measuring the isolated quality of P-EMS. From the time a patient is admitted to hospital by P-EMS until a long-term outcome is measured, the patient has received care from numerous units, each potentially influencing outcome [41]. Without risk adjustment and outcome measurement for each of these care intervals, it is problematic to use long-term outcome measures as indicators of the isolated quality of P-EMS. Instead, quality indicators from the pre-hospital care interval should be developed for this purpose [23]. The Institute of Medicine has stated that “quality of care is the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge” [42]. This definition of quality is a reminder that good quality is not identical to good outcomes. Even when excellent health care is provided, patient outcomes can be poor. Conversely, patients receiving poor-quality health care can have good outcomes.

Strengths and limitations

Using the professional network of the study group to recruit panel members may have limitations: colleagues who share our own professional interests may have been easier to identify and invite than those with other views and mindsets. This practice may have led to an imbalance in the composition of the expert panel. Although the panel reflected the inter-disciplinary nature of EMS, we recognize that we did not include a representative from an Emergency Medical Communication Central (EMCC). There was a trade-off between a manageable number of experts and the need for an inter-disciplinary composition of the panel. The consensus methodology literature describes an optimal group size of eight to twelve members [43]. Our efforts to make the panel sufficiently inter-disciplinary resulted in a group size of 18 experts. However, owing to a strict time schedule throughout the process, the slightly larger panel did not cause any unnecessary delay.

Eight nations were represented in the expert panel, all developed countries and the majority Scandinavian. We therefore recognize that other areas may need other QIs, which should be implemented locally. However, P-EMS as a service is usually delivered only in developed countries; for these services, the nationalities included should be representative.

In the consensus process we used a system of ranking and scoring to identify the QIs supported by the most experts in the panel. There are different methods to prioritize proposals and obtain consensus, and no method is considered clearly superior to the others [44]. At the consensus meeting, any proposal was omitted if vigorously opposed by one or more of the participants.

The use of a Likert scale is another recognized method for defining the level of consensus. Likert scores have been used for QI selection in several studies, including a recent Danish study selecting QIs for hospital-based emergency care [45, 46]. Whether the use of a Likert scale would have improved our consensus process remains unclear. Moreover, it is methodologically important to prevent verbally skilled panel members from dominating the consensus process. This issue also relates to “strong” personalities or experts enjoying a higher reputation than the other panel members [19]. Therefore, proposals and rankings in stages 1 and 2 were anonymous.

The value of this study lies in the development of multidimensional quality indicators for P-EMS, representing a starting point for future studies on measuring and improving the quality of P-EMS. The necessary next step is to test the feasibility and validity of the QIs in a sample of P-EMSs, from which a more definitive set of QIs for P-EMS can be developed.

Conclusion

Using a modified nominal group technique, an international expert panel reached consensus on 15 response-specific and 11 system-specific quality indicators for P-EMS. All six quality dimensions stated by the Institute of Medicine are covered, and the quality indicators include structure, process and outcome indicators. This set of 26 quality indicators was developed to represent a broad and comprehensive approach to quality measurement in international P-EMS, allowing future quality measurement to be comparable across different P-EMS systems.

Abbreviations

EMCC: 

Emergency Medical Communication Central

EMS: 

Emergency medical service

P-EMS: 

Physician-staffed emergency medical service

QI: 

Quality indicator

Declarations

Acknowledgements

The project group would like to thank the following members of the “EQUIPE-collaboration group” for their valuable participation: Gry Elise Albrektsen (Norway), Peter Anthony Berlac (Denmark), Geir Sverre Braut (Norway), Per Bredmose (Norway), Robert Burman (Norway), Brian Burns (Australia), Alasdair Corfield (Scotland), Marta Ebbing (Norway), Magnus Hjortdahl (Norway), Freddy Lippert (Denmark), Pål Madsen (Norway), Per Oretorp (Norway), Leif Rognås (Denmark), Julian Thompson (UK), Oddvar Uleberg (Norway), Janne Virta (Finland), Wolfgang G Voelckel (Austria) and Ryan Wubben (USA).

The authors would like to thank all members and involved staff from NAAF for making this study possible.

Funding

This project was financed by the Norwegian Air Ambulance Foundation (NAAF), and the first author (HH) holds a position as a PhD-student financed by NAAF.

Availability of data and materials

The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.

Authors’ contributions

HH was the main author, communicated with the members of the expert panel and collected the data. AK conceptualized and initiated the project. MR chaired the consensus meeting, and all authors were involved in the planning of the meeting and the writing process. All authors read and approved the final version of the manuscript.

Competing interests

MR is Deputy Editor of Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine. He had no involvement in the editorial management of this manuscript.

Consent for publication

Not applicable.

Ethics approval and consent to participate

Not applicable.

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Authors’ Affiliations

(1)
Department of Research and Development, Norwegian Air Ambulance Foundation
(2)
Department of Emergency Medicine and Pre-Hospital Services, St. Olavs Hospital
(3)
Department of Circulation and Medical Imaging, Medical Faculty, NTNU, Norwegian University of Science and Technology
(4)
Department of Health Studies, University of Stavanger
(5)
Division of Emergencies and Critical Care. Department of Anaesthesia, Oslo University Hospital
(6)
Department of Anaesthesiology and Intensive Care, St. Olav University Hospital

References

  1. Fischer M, Kamp J, Garcia-Castrillo Riesgo L, Robertson-Steel I, Overton J, Ziemann A, et al. Comparing emergency medical service systems – a project of the European Emergency Data (EED) Project. Resuscitation. 2011;82(3):285–93.
  2. Kruger AJ, Lossius HM, Mikkelsen S, Kurola J, Castren M, Skogvoll E. Pre-hospital critical care by anaesthesiologist-staffed pre-hospital services in Scandinavia: a prospective population-based study. Acta Anaesthesiol Scand. 2013;57(9):1175–85. doi:10.1111/aas.12181.
  3. Ghosh R, Pepe P. The critical care cascade: a systems approach. Curr Opin Crit Care. 2009;15(4):279–83. doi:10.1097/MCC.0b013e32832faef2.
  4. Lockey D. International EMS systems: Geographical lottery and diversity but many common challenges. Resuscitation. 2009;80(7):722. doi:10.1016/j.resuscitation.2009.04.006.
  5. Kruger AJ, Skogvoll E, Castren M, Kurola J, Lossius HM; ScanDoc Phase 1a Study Group. Scandinavian pre-hospital physician-manned Emergency Medical Services – same concept across borders? Resuscitation. 2010;81(4):427–33. doi:10.1016/j.resuscitation.2009.12.019.
  6. Garner AA. The role of physician staffing of helicopter emergency medical services in prehospital trauma response. Emerg Med Australas. 2004;16(4):318–23. doi:10.1111/j.1742-6723.2004.00636.x.
  7. Rehn M, Davies G, Smith P, Lockey DJ. Structure of Rapid Response Car Operations in an Urban Trauma Service. Air Med J. 2016;35(3):143–7. doi:10.1016/j.amj.2015.12.015.
  8. Porter ME. What is value in health care? N Engl J Med. 2010;363(26):2477–81.
  9. Institute of Medicine. Emergency Medical Services at a Crossroads. Washington DC, USA: The National Academies Press; 2006.
  10. World Health Organization. Quality of care: a process for making strategic choices in health systems. 2006.
  11. National Committee for Quality Assurance. The essential guide to health care quality. http://www.ncqa.org/Newsroom/ResourceLibrary/EssentialGuidetoHealthCareQuality.aspx. Accessed 15 Feb 2016.
  12. Fevang E, Lockey D, Thompson J, Lossius HM; Torpo Research Collaboration. The top five research priorities in physician-provided pre-hospital critical care: a consensus report from a European research collaboration. Scand J Trauma Resusc Emerg Med. 2011;19:57. doi:10.1186/1757-7241-19-57.
  13. Snooks H, Evans A, Wells B, Peconi J, Thomas M, Woollard M, et al. What are the highest priorities for research in emergency prehospital care? Emerg Med J. 2009;26(8):549–50. doi:10.1136/emj.2008.065862.
  14. Mainz J. Defining and classifying clinical indicators for quality improvement. Int J Qual Health Care. 2003;15(6):523–30.
  15. Sackett D, Strauss S, Richardson W. Evidence-Based Medicine: how to practice and teach EBM. 2nd ed. Edinburgh: Churchill Livingstone; 2000.
  16. Spaite DW, Criss EA, Valenzuela TD, Guisto J. Emergency medical service systems research: problems of the past, challenges of the future. Ann Emerg Med. 1995;26(2):146–52.
  17. Donabedian A. The quality of care. How can it be assessed? JAMA. 1988;260(12):1743–8.
  18. Donabedian A. Evaluating the quality of medical care. Milbank Mem Fund Q. 1966;44 Suppl 1:166–206.
  19. Murphy MK, Black NA, Lamping DL, McKee CM, Sanderson CF, Askham J, et al. Consensus development methods, and their use in clinical guideline development. Health Technol Assess. 1998;2(3):i–iv, 1–88.
  20. Lossius HM, Kruger AJ, Ringdal KG, Sollid SJ, Lockey DJ. Developing templates for uniform data documentation and reporting in critical care using a modified nominal group technique. Scand J Trauma Resusc Emerg Med. 2013;21:80. doi:10.1186/1757-7241-21-80.
  21. Kruger AJ, Lockey D, Kurola J, Di Bartolomeo S, Castren M, Mikkelsen S, et al. A consensus-based template for documenting and reporting in physician-staffed pre-hospital services. Scand J Trauma Resusc Emerg Med. 2011;19:71. doi:10.1186/1757-7241-19-71.
  22. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the Twenty-first Century. Washington: National Academies Press; 2001.
  23. Rehn M, Kruger AJ. Quality improvement in pre-hospital critical care: increased value through research and publication. Scand J Trauma Resusc Emerg Med. 2014;22:34.
  24. Jones SE, Brenneis AT. Study design in prehospital trauma advanced life support-basic life support research: a critical review. Ann Emerg Med. 1991;20(8):857–60.
  25. Moore L. Measuring quality and effectiveness of prehospital EMS. Prehosp Emerg Care. 1999;3(4):325–31.
  26. El Sayed MJ. Measuring quality in emergency medical services: a review of clinical performance indicators. Emerg Med Int. 2012;2012:161630.
  27. Department of Health. Emergency care 10 years on: reforming emergency care. London: Department of Health; 2007.
  28. Cooke M, Fischer J, McLeod E, et al. Reducing attendances and waits in emergency departments: a systematic review of present innovations. London: National Coordinating Centre for NHS Service Delivery and Organisation (NCCSDO); 2005. http://www.netscc.ac.uk/hsdr/files/project/SDO_ES_08-1204-029_V01.pdf.
  29. Eisenberg MS, Bergner L, Hallstrom A. Cardiac resuscitation in the community. Importance of rapid provision and implications for program planning. JAMA. 1979;241(18):1905–7.
  30. Pons PT, Haukoos JS, Bludworth W, Cribley T, Pons KA, Markovchick VJ. Paramedic response time: does it affect patient survival? Acad Emerg Med. 2005;12(7):594–600. doi:10.1197/j.aem.2005.02.013.
  31. Price L. Treating the clock and not the patient: ambulance response times and risk. Qual Saf Health Care. 2006;15(2):127–30. doi:10.1136/qshc.2005.015651.
  32. Walcher F, Weinlich M, Conrad G, Schweigkofler U, Breitkreutz R, Kirschning T, et al. Prehospital ultrasound imaging improves management of abdominal trauma. Br J Surg. 2006;93(2):238–42. doi:10.1002/bjs.5213.
  33. Chesters A, Grieve PH, Hodgetts TJ. A 26-year comparative review of United Kingdom helicopter emergency medical services crashes and serious incidents. J Trauma Acute Care Surg. 2014;76(4):1055–60. doi:10.1097/TA.0000000000000170.
  34. Kruger AJ, Lippert F, Brattebo G. Pre-hospital care and hazardous environments. Acta Anaesthesiol Scand. 2012;56(2):135–7. doi:10.1111/j.1399-6576.2011.02619.x.
  35. Aylwin CJ, Konig TC, Brennan NW, Shirley PJ, Davies G, Walsh MS, et al. Reduction in critical mortality in urban mass casualty incidents: analysis of triage, surge, and resource use after the London bombings on July 7, 2005. Lancet. 2006;368(9554):2219–25. doi:10.1016/S0140-6736(06)69896-6.
  36. Stortingsmelding nr. 50 (1993–94). Samarbeid og styring [Report to the Storting No. 50: Cooperation and governance].
  37. Roudsari BS, Nathens AB, Cameron P, Civil I, Gruen RL, Koepsell TD, et al. International comparison of prehospital trauma care systems. Injury. 2007;38(9):993–1000.
  38. Rognas L, Hansen TM, Kirkegaard H, Tonnesen E. Refraining from pre-hospital advanced airway management: a prospective observational study of critical decision making in an anaesthesiologist-staffed pre-hospital critical care service. Scand J Trauma Resusc Emerg Med. 2013;21:75. doi:10.1186/1757-7241-21-75.
  39. Institute for Patient- and Family-centered Care. www.ipfcc.org. Accessed 3 July 2016.
  40. Committee on Hospital Care and Institute for Patient- and Family-Centered Care. Patient- and family-centered care and the pediatrician’s role. Pediatrics. 2012;129(2):394–404. doi:10.1542/peds.2011-3084.
  41. Spaite DW, Maio R, Garrison HG, Desmond JS, Gregor MA, Stiell IG, et al. Emergency Medical Services Outcomes Project (EMSOP) II: developing the foundation and conceptual models for out-of-hospital outcomes research. Ann Emerg Med. 2001;37(6):657–63.
  42. Institute of Medicine. Medicare: a strategy for quality assurance. Washington DC: National Academy Press; 1990.
  43. Waggoner J, Carline JD, Durning SJ. Is There a Consensus on Consensus Methodology? Descriptions and Recommendations for Future Consensus Research. Acad Med. 2016;91(5):663–8. doi:10.1097/ACM.0000000000001092.
  44. Fink A, Kosecoff J, Chassin M, Brook RH. Consensus methods: characteristics and guidelines for use. Am J Public Health. 1984;74(9):979–83.
  45. Madsen MM, Eiset AH, Mackenhauer J, Odby A, Christiansen CF, Kurland L, et al. Selection of quality indicators for hospital-based emergency care in Denmark, informed by a modified-Delphi process. Scand J Trauma Resusc Emerg Med. 2016;24(1):11. doi:10.1186/s13049-016-0203-x.
  46. Boulkedid R, Abdoul H, Loustau M, Sibony O, Alberti C. Using and reporting the Delphi method for selecting healthcare quality indicators: a systematic review. PLoS ONE. 2011;6(6):e20476. doi:10.1371/journal.pone.0020476.

Copyright

© The Author(s). 2017