  • Original research
  • Open access

Can clinicians identify community-acquired pneumonia on ultralow-dose CT? A diagnostic accuracy study

Abstract

Background

Compared with a chest radiograph, ultralow-dose computed tomography (CT) of the chest improves the accuracy of radiological pneumonia diagnosis without increasing radiation exposure. Yet, radiologist resources to rapidly report these chest CTs are limited. This study aimed to assess the diagnostic accuracy of emergency clinicians’ assessments of chest ultralow-dose CTs for community-acquired pneumonia, using a radiologist’s assessments as the reference standard.

Methods

This was a cross-sectional diagnostic accuracy study. Ten emergency department clinicians (five junior clinicians, five consultants) assessed chest ultralow-dose CTs from acutely hospitalised patients suspected of having community-acquired pneumonia. Before the assessments, the clinicians attended a focused training course on assessing ultralow-dose CTs for pneumonia. The reference standard was the assessment by an experienced emergency department radiologist. The primary outcome was the presence or absence of pulmonary opacities consistent with community-acquired pneumonia. Sensitivity, specificity, and predictive values were calculated using generalised estimating equations.

Results

All clinicians assessed 128 ultralow-dose CTs. The prevalence of findings consistent with community-acquired pneumonia was 56%. Seventy-eight percent of the clinicians’ CT assessments matched the reference assessment. Diagnostic accuracy estimates were: sensitivity = 83% (95%CI: 77–88), specificity = 70% (95%CI: 59–81), positive predictive value = 80% (95%CI: 74–84), negative predictive value = 78% (95%CI: 73–82).

Conclusion

This study found that clinicians could assess chest ultralow-dose CTs for community-acquired pneumonia with high diagnostic accuracy. A higher level of clinical experience was not associated with better diagnostic accuracy.

Background

Community-acquired pneumonia (CAP) is a common cause of hospitalisation worldwide [1], and antibiotic treatment is often indicated to treat and prevent potential deterioration to sepsis or respiratory failure. Even more often, CAP is a differential diagnosis requiring diagnostic investigations in acutely hospitalised adults [2, 3]. Fast and accurate CAP diagnosis is essential for optimal patient treatment and is therefore a core element of in-hospital antimicrobial stewardship [4].

Diagnostic imaging is required for CAP diagnosis in a hospital setting, as clinical signs and symptoms alone have insufficient diagnostic ability [5]. Despite their limited accuracy, chest radiographs are the most commonly used modality [1, 6]. Chest computed tomography (CT) is diagnostically superior to chest radiography [6, 7]. Yet, standard-dose CT is not suitable for routine CAP diagnosis because of the cancer risk related to its higher radiation exposure [8].

Reduced-dose chest CT is an emerging imaging modality for lung tissue that minimises risks and ethical concerns by lowering radiation exposure. Depending on the dose reduction, the investigation is referred to as low-dose or ultralow-dose CT (ULD-CT), with no strict definition of effective radiation doses [9]. In recent studies, the accuracy of ULD-CT in detecting various pulmonary pathologies, including changes consistent with pneumonia, has been good compared to standard-dose CT [10,11,12]. These studies applied mean effective radiation doses between 0.05 and 0.26 mSv, which is similar to the radiation exposure from a regular chest radiograph (approximately 0.1 mSv) [13]. For comparison, standard-dose CT of the chest involves mean radiation exposures of around 5–7 mSv [13, 14].

The addition of a reduced-dose chest CT reported by a radiologist has been shown to increase the diagnostic certainty of clinicians treating patients suspected of having CAP in hospitals [7, 15]. However, radiologist resources are limited, especially in the emergency department (ED). Thus, adding ULD-CT for patients suspected of CAP entails a potential delay in radiological reporting. Hypothetically, that could delay patient treatment or impair the clinician’s decision-making basis when starting treatment. If ED clinicians were able to perform the first ULD-CT assessment for CAP changes, it could increase early diagnostic certainty and remove some time pressure from the ED radiologists.

The primary aim of this study was to investigate the diagnostic accuracy of ED clinicians’ independent assessments of the presence of CAP on chest ULD-CTs from acutely admitted patients with a clinical appearance suggesting CAP, using an ED radiologist’s assessments as reference. Secondary aims were to investigate the association between level of clinical experience and diagnostic accuracy, the reliability of the clinicians’ assessments, and the clinicians’ confidence in their assessments.

Methods

Study design

The study was a cross-sectional diagnostic accuracy study using retrospectively collected data. It was part of the umbrella project Infectious Diseases in Emergency Departments (INDEED), which aims to improve acute infection diagnostics to support antimicrobial stewardship in hospitals [16].

The study was registered by the Danish Data Protection Agency (no. 20/60508). Ethical approval was obtained from the local ethics committee, and all patients provided written and oral informed consent. Reporting was guided by the Standards for Reporting of Diagnostic Accuracy Studies (STARD) guidelines [17].

Setting

The study utilised ULD-CTs conducted on patients recruited from a Danish ED. Here, acutely hospitalised non-trauma patients are referred to a medical specialty (surgery, cardiology, neurology, or acute medicine) prior to medical assessment. The study focused on staff and patients referred to the acute medicine unit, where patients suspected of CAP are primarily assessed. Following the initial clinical assessment, further diagnostic investigations are ordered based on the tentative diagnoses, including referral to diagnostic imaging. The aim is to establish a treatment plan within the first four hours of the hospital stay. Patient inclusion was conducted by study assistants on weekdays between 8 a.m. and 8 p.m. from March 2021 to February 2022. Resources for patient inclusion were not available during night-time hours.

In this study, the ULD-CT data were utilised outside of the clinical environment after patient inclusion had been finalised. The planning, preparation, and collection of clinician assessments were conducted in the second half of 2022. The clinicians contributed to the study in their non-working hours.

Study population

The study population was a subsample of the INDEED project CAP population [16]. Patients were eligible if the receiving clinician suspected CAP at the initial clinical assessment, before further diagnostic test results were available. No specific requirements for symptoms or findings were set. Only patients over 40 years old were eligible for a ULD-CT investigation, to avoid unnecessary radiation exposure of younger adults. The most important exclusion criteria were: a) verified SARS-CoV-2 infection within two weeks (to avoid a pandemic-related dominance of this disease in the study population); b) recent hospitalisation within 14 days (to avoid hospital-acquired infections); c) ongoing immunosuppressive or antineoplastic treatment (as these patients represent a population in need of specialist evaluation and are rarely candidates for restrictive antibiotic treatment). Further details on the participation criteria are available in the protocol [16].

For the current study, we extracted consecutively included patients with an ULD-CT available from one inclusion site (Hospital Lillebaelt, Denmark) for consistency in ULD-CT images. No emphasis was put on image quality or the presence of CAP, as the study population should reflect a realistic flow of patients suspected of CAP.

Test methods

The index test was ten clinicians’ individual ULD-CT assessments for CAP. All clinicians were affiliated with the acute medicine unit of an ED. They represented two different levels of experience: five junior doctors with 0–1 year of clinical experience and five consultants in emergency medicine and/or internal medicine (not including pulmonologists, to avoid bias from their experience with assessing CT).

Prior to the study assessments, all clinicians attended a five-hour web-based, interactive course focusing on assessing ULD-CT for typical pneumonic opacities. Other acute findings (pneumothorax, pleural effusion, and pulmonary oedema) were briefly covered as well. The course was organised and conducted by a professor of radiology (OG) with considerable experience in teaching as well as in research and clinical work with ULD-CT. The course included a short theoretical presentation, cases for individual assessment and plenary discussion, an individual test with ten ULD-CT case assessments, and a follow-up with feedback on the test cases.

A web-based picture archiving and communication system (PACS) by Collective Minds Radiology (Sweden) was used for anonymised ULD-CT presentation.

The clinicians’ assessments were registered on a template in Research Electronic Data Capture (REDCap). The primary content was a binary assessment of the presence of pneumonic opacities consistent with CAP, and the confidence in this assessment stated on a 7-point Likert scale. In addition, the presence of pneumothorax, pleural effusion, and pulmonary oedema was to be registered (template available in Additional file 1).

The reference standard was a yes/no assessment of ULD-CTs for presence of CAP by one ED radiologist with 10 years of experience (CSS). Findings interpreted as pneumonia were consolidations that were not in a tumour or nodular pattern, tree-in-bud patterns, poorly defined peri-bronchial nodules observed in bronchopneumonia, and ground-glass opacifications. The reference assessment was part of a more thorough ULD-CT assessment. Thus, the radiologist’s assessment template in REDCap was not identical to the clinicians’ template (relevant parts of the template are available in Additional file 2). The radiologist also performed an assessment of image quality of the entire chest CT scan. However, the image quality of the lungs was always sufficient to address common point-of-care questions such as pneumothorax and pneumonia.

All assessors, including the radiologist, were aware that ULD-CTs were conducted to investigate for suspected CAP. They were blinded to other clinical data, comorbidities, previous and follow-up imaging, and ULD-CT assessments by the other assessors.

Analysis

Summary statistics were used to describe the data.

Assisted by a statistician, sensitivity, specificity, and predictive values of the clinicians’ CAP assessments were calculated using generalised estimating equations (GEE) with a logit link function to account for correlations in assessments within each rater. The same model was applied for accuracy calculations on subgroups defined by image quality, chronic pulmonary disease diagnoses, and the clinicians’ confidence in their assessments. We used a z-test to examine the statistical difference in diagnostic performance between the two levels of clinical experience.
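As an illustration of the GEE approach described above, the following Python sketch (using statsmodels, not the authors’ Stata code) shows how sensitivity and specificity with cluster-robust 95% confidence intervals could be estimated from stacked clinician–case assessments. The data frame `df` and its column names are hypothetical and only serve to show the structure of such an analysis.

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format pandas DataFrame `df` with one row per (clinician, case):
#   rater      - clinician id (cluster variable)
#   reader_pos - 1 if the clinician called CAP present on the ULD-CT, else 0
#   ref_pos    - 1 if the reference radiologist called CAP present, else 0

def gee_proportion(data):
    """Intercept-only logistic GEE; returns the pooled proportion with its 95% CI,
    accounting for correlation of assessments within each rater."""
    model = smf.gee("y ~ 1", groups="rater", data=data,
                    family=sm.families.Binomial(),
                    cov_struct=sm.cov_struct.Exchangeable())
    res = model.fit()
    inv_logit = lambda x: 1.0 / (1.0 + np.exp(-x))
    lo, hi = res.conf_int().loc["Intercept"]
    return inv_logit(res.params["Intercept"]), inv_logit(lo), inv_logit(hi)

# Sensitivity: clinician-positive calls among reference-positive cases
sensitivity = gee_proportion(df[df.ref_pos == 1].assign(y=lambda d: d.reader_pos))
# Specificity: clinician-negative calls among reference-negative cases
specificity = gee_proportion(df[df.ref_pos == 0].assign(y=lambda d: 1 - d.reader_pos))
```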

Rater reliability was calculated as both kappa (Cohen’s kappa for pairwise comparisons, Conger’s kappa for the fully crossed multiple-rater design [18]) and percent agreement. Kappa was interpreted as: κ ≤ 0: no agreement, 0.01–0.20: none to slight, 0.21–0.40: fair, 0.41–0.60: moderate, 0.61–0.80: substantial, and 0.81–1.00: almost perfect [19].
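To make the reliability measures concrete, here is a minimal Python sketch of pairwise Cohen’s kappa and percent agreement with the interpretation bands above. The ten-case rating vectors are invented for illustration only, and Conger’s multi-rater kappa used in the paper is not reproduced here.

```python
import numpy as np

def cohen_kappa(r1, r2):
    """Cohen's kappa and observed agreement for two raters' binary (0/1) calls."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)                      # observed agreement
    p1, p2 = r1.mean(), r2.mean()               # each rater's rate of positive calls
    pe = p1 * p2 + (1 - p1) * (1 - p2)          # agreement expected by chance
    return (po - pe) / (1 - pe), po

def interpret(kappa):
    """Verbal interpretation bands as in McHugh [19]."""
    if kappa <= 0:
        return "no agreement"
    bands = [(0.0, "none to slight"), (0.21, "fair"), (0.41, "moderate"),
             (0.61, "substantial"), (0.81, "almost perfect")]
    return [label for lo, label in bands if kappa >= lo][-1]

# Toy example: two clinicians' CAP calls on ten hypothetical cases
a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
k, agreement = cohen_kappa(a, b)
print(f"kappa = {k:.2f} ({interpret(k)}), percent agreement = {agreement:.0%}")
```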

Stata statistical software (BE 17.0; StataCorp, College Station, Texas) was used for the analyses.

Sample size

The sample size was based on the desired precision of the sensitivity and specificity estimates, set at 15 percentage points: we hypothesised sensitivity and specificity of 85% and wanted the lower limit of the confidence interval to be at least 70%. As the confidence intervals were based on bootstrapping, we employed Monte Carlo simulation. From this, 128 patients were needed.
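A minimal sketch of how such a precision-based Monte Carlo calculation could look is given below. It is not the authors’ simulation code; the assumed prevalence, simulation counts, and bootstrap settings are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)

def fraction_precise(n_total, prevalence=0.55, true_sens=0.85,
                     target_lower=0.70, n_sim=500, n_boot=1000):
    """Fraction of simulated studies of size n_total in which the bootstrap
    95% CI for sensitivity has a lower limit of at least target_lower."""
    ok = 0
    for _ in range(n_sim):
        n_pos = rng.binomial(n_total, prevalence)          # reference-positive cases
        if n_pos == 0:
            continue
        calls = rng.random(n_pos) < true_sens              # simulated true-positive calls
        boots = rng.choice(calls, size=(n_boot, n_pos)).mean(axis=1)
        if np.percentile(boots, 2.5) >= target_lower:
            ok += 1
    return ok / n_sim

# Increase n_total until the lower confidence limit reliably stays above 70%
for n in (100, 128, 150):
    print(n, round(fraction_precise(n), 2))
```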

The number of clinicians was determined based on rational considerations, aiming to obtain reasonable face validity and heterogeneity among the clinicians while respecting the clinician time required for the study.

The number of duplicate case assessments for intra-rater reliability calculations was calculated to be 12 (based on an expected kappa of 0.85 and a lower limit of at least 0.5). Thereby, all clinicians ended up making 140 ULD-CT assessments. The clinicians were not informed of the presence of duplicates. Duplicates were presented with at least 90 other ULD-CTs in between and not in sequence. One assessment from each duplicate pair was randomly discarded prior to the other analyses.

Ultralow-dose CT specifications

A GE Revolution CT scanner (GE Healthcare, Waukesha, US) was used for the non-enhanced ULD-CTs. The applied ULD-CT protocol administered a mean effective dose of 0.27 mSv to a test sample using an identical scanner. Standard parameters of the ULD-CT protocol are presented in Table 1. Detailed information on technical specifications was published in a technical note by Mussmann et al. [20].

Table 1 Standard parameters of the chest ULD-CT protocol

As part of the umbrella project (INDEED), the ULD-CT protocol was validated for CAP diagnosis against a standard-dose chest CT conducted in the same sequence. Additionally, we collected data on chest radiographs, which most patients underwent as part of standard care. The pneumonia readings from ULD-CT and standard-dose CT aligned in 86% of the cases (110/128). Chest radiograph readings aligned with standard-dose CT in 73% of the cases (92/126).

Results

Study population

The required sample of 128 patients was extracted from the INDEED population [16] (Fig. 1). Baseline characteristics of the patients are presented in Table 2. All ULD-CTs were performed within six hours of the patients’ arrival at the hospital (median 2.4 h). In 56% of the cases, CAP was radiologically present. Seventy-three percent of the ULD-CTs were deemed of sufficient quality to visualise any potential pathology. Only one ULD-CT was considered insufficient for diagnostics. Body mass index above 40 (4% of the patients) was always associated with reduced image quality.

Fig. 1

Flow of patients. a From the inclusion site at Hospital Lillebaelt, Kolding. b Patients included 8th April 2021 – 9th December 2021. c Patients included 1st March 2021 – 31st March 2021 and 10th December 2021 – 25th February 2022 (March avoided to match availability of project blood samples). INDEED: Infectious Diseases in Emergency Departments project. CAP: community-acquired pneumonia. ULD-CT: ultralow-dose computed tomography

Table 2 Characteristics of the study population

Diagnostic accuracy

All clinicians assessed all 128 cases over a six-week period, starting immediately after the training course.

Seventy-eight percent of the clinicians’ CAP assessments corresponded to the radiologist’s assessment. Cross-tabulations are available in Additional file 3.

The diagnostic accuracy parameters are presented in Table 3. The clinicians’ overall sensitivity was 83% (95%CI: 77–88) and specificity was 70% (95%CI: 59–81). The positive predictive value was 80% (95%CI: 74–84) and the negative predictive value was 78% (95%CI: 73–82). No statistically significant difference in diagnostic accuracy related to the clinicians’ level of experience was observed. From the point estimates, junior clinicians’ accuracy tended to be slightly better than that of the consultants. The clinicians’ individual sensitivity ranged from 62 to 94% and specificity from 30 to 89%.

Table 3 The clinicians’ diagnostic accuracy and interrater reliability in identifying community-acquired pneumonia on ultralow-dose CT

Sub-analyses using only cases with a better image quality, patients without a chronic pulmonary disease diagnosis, or assessments with high confidence (5–7 on the Likert scale) all resulted in slightly increased specificity. Maximum specificity was 77% for junior clinicians on cases without chronic pulmonary disease. Sensitivity stayed unchanged for consultants. Junior clinicians’ sensitivity increased to 89% when they were confident in their assessments (sub-analysis estimates are available in Additional file 4).

Rater reliability

Inter-rater reliability among all clinicians was “moderate” (Table 3), with kappa = 0.54 (95%CI: 0.46–0.61) and percent agreement = 78% (95%CI: 74–81). Kappa estimates and percent agreement were better among junior clinicians and had narrower confidence intervals.

Intra-rater reliability was “almost perfect” for nine clinicians and “moderate” for one clinician. The average intra-rater kappa value was 0.87, average intra-rater percent agreement was 93.5%.

Clinicians’ confidence and time consumption

On the 7-point Likert scale, median confidence was 6 (IQR: 5–7) in assessments regarding CAP presence for both clinician groups. Median time consumption for each assessment, including registration, was 2 min (IQR 1–3) for consultants and 2 min (IQR: 2–3) for juniors (Fig. 2).

Fig. 2

Examples of images from the study population. (a) Ultralow-dose computed tomography (ULD-CT) of the chest from a patient with an opacity consistent with community-acquired pneumonia (CAP) in the right lobe, which is also seen in the corresponding chest radiograph (b). (c) ULD-CT of the chest from a patient with a left lower lobe opacity consistent with CAP. This opacity was not initially identified on the corresponding chest radiograph by the reporting radiologist (d). After reviewing the ULD-CT and comparing it with the chest radiograph, a small pneumonia could be suspected upon a second examination of the images (c, d)

Discussion

In this study, five consultants, five junior clinicians, and one radiologist assessed 128 unique ULD-CTs for CAP changes. The clinicians’ assessments showed good agreement (78%) with the radiologist’s, and the clinicians’ confidence in their assessments was good. The clinicians’ overall sensitivity was 83% (95%CI: 77–88), specificity was 70% (95%CI: 59–81), and positive and negative predictive values were high. No statistically significant difference in diagnostic accuracy was found between juniors and consultants. Interrater reliability between clinicians was only moderate, with a non-significant tendency towards better reliability among junior clinicians compared to consultants.

To our knowledge, this is the first study investigating the diagnostic accuracy of clinicians’ assessments of ULD-CT targeted primarily at a CAP diagnosis. One previous study examined clinicians’ agreement in assessing ULD-CTs from ED patients presenting with dyspnoea [22]. The prevalence of pneumonia in their population was low (11%), and no comparable diagnostic accuracy estimates were available. Similar to this study, they reported a tendency towards better interrater reliability among junior clinicians (kappa = 0.66) compared to consultants (kappa = 0.33). A possible explanation of this trend is that the junior doctors form a more uniform group.

Previous studies reported that radiologist-reported reduced-dose CT improved clinicians’ diagnostic certainty in patients admitted with suspected pneumonia, especially in terms of ruling out pneumonia in patients with an intermediate pre-CT probability of having pneumonia [7, 15]. In the current study, the clinicians’ specificity was only moderate, and 13% of the assessments were false positive. This suggests that clinicians have difficulty independently ruling out pneumonia on pneumonia-negative ULD-CTs, which limits the degree to which unnecessary antibiotic treatments can be reduced. Yet, it should be emphasised that imaging is not a stand-alone diagnostic tool in CAP [23]. Thus, this study primarily represents an isolated view of diagnosis based on an unreported ULD-CT compared to having a radiological report.

We identified no recent, comparable studies evaluating clinicians’ diagnostic accuracy in assessing chest radiographs for pneumonia. A study from 1994 found that 66–72% of clinicians’ assessments for pneumonia on 15 chest radiographs were in agreement with the reference assessments by radiologists [24], which is slightly lower than our findings for ULD-CT. Previous studies on clinicians’ general assessments of chest radiographs indicated deficient skills, especially among junior doctors [25, 26]. In the current study, there was no effect of clinical experience on diagnostic accuracy. From this, it could be hypothesised that this study’s results represent a baseline level of clinicians’ accuracy, with room for improvement over time if the investigation becomes more widely used and clinicians gain more experience in assessing ULD-CTs.

A strength of this study was the training and testing of the clinicians before the ULD-CT assessments, which ensured equal basic qualifications. The course was pragmatic, interactive, and of reasonable duration, thus representing a realistic offer to medical staff outside a study setting. The training course did not cover all types of pneumonic changes in detail. Therefore, some disagreement was expected due to the radiologist’s ability to more confidently identify a broader spectrum of pneumonic changes. Possibilities exist to expand training and learning, for instance by increasing the duration and intensity of the training programme or with ongoing feedback, which could be achieved by giving clinicians the opportunity to follow up on the radiologists’ reports.

This study presents relevant data to support considerations of implementing chest ULD-CT for in-hospital infection diagnosis. Yet, several issues still need further clarification, including better clarification of the impact on radiological capacity, the amount and relevance of additional incidental findings, and the effect on patient management.

No clinical data were available to the clinicians and the radiologist while interpreting the ULD-CTs. This was a strength in terms of enhancing the objectivity of image interpretation. However, patient history and diagnostic expectations affect diagnostic performance among radiologists [27], and clinicians’ assessments would probably be influenced by other clinical data in a real setting as well. Thus, the same degree of objectivity might not apply outside the study setting. Further, the lack of access to previous patient imaging can be viewed as a limitation, as previous images could assist in correctly identifying acute changes, thereby increasing diagnostic accuracy for acute CAP-related findings.

Another limitation was the exclusion criteria set for the INDEED population. More than half of the patients suspected of having CAP were excluded. The main reason was SARS-CoV-2 infection, which seems reasonable in a post-pandemic context. Yet, especially the exclusion of patients receiving immunosuppressive treatment or with recent admissions could affect generalisability, as ULD-CTs from these patients could be more difficult to assess.

Some degree of inter-radiologist variability in image readings is well known, including in ULD-CT readings [22] and pneumonia diagnoses from chest radiographs [28, 29]. Thus, some classification bias could be present in this study, especially because early-stage CAP cases are represented in the study population. Further, a chest ULD-CT reported by one ED radiologist cannot be regarded as a gold standard for diagnostic imaging in CAP; it represents a pragmatic reference standard comparable to what is available in the ED.

Patients’ BMI impacts the possibilities for radiation dose reduction [30]. This study revealed that a BMI above 40 was always associated with reduced image quality. This indicates that ULD-CT is not the best investigation for patients with severe obesity, and adjustments in radiation doses or scanning protocols may be necessary.

Conclusions

This study found that clinicians could assess chest ULD-CTs for CAP with high, but not perfect, diagnostic accuracy using an ED radiologist’s assessments as reference standard. Interrater reliability among clinicians was moderate. A higher level of clinical experience was not associated with better accuracy or interrater reliability.

Availability of data and materials

The dataset used in the current study is available from the corresponding author on reasonable request.

Abbreviations

CT: Computed tomography

ULD-CT: Ultralow-dose computed tomography

CAP: Community-acquired pneumonia

ED: Emergency department

mSv: Millisievert

PACS: Picture archiving and communication system

REDCap: Research Electronic Data Capture

GEE: Generalised estimating equations

PPV: Positive predictive value

NPV: Negative predictive value

CI: Confidence interval

References

  1. Aliberti S, Dela Cruz CS, Amati F, Sotgiu G, Restrepo MI. Community-acquired pneumonia. Lancet. 2021;398(10303):906–19.

  2. Klompas M, Ramirez JA, Bond S. Clinical evaluation and diagnostic testing for community-acquired pneumonia in adults. UpToDate [updated 2023.01.03]. Available from: https://www.uptodate.com/contents/clinical-evaluation-and-diagnostic-testing-for-community-acquired-pneumonia-in-adults. Accessed 4 January 2024.

  3. Long DA, Long B, Koyfman A. Clinical mimics: an emergency medicine focused review of pneumonia mimics. Intern Emerg Med. 2018;13(4):539–47.

  4. Septimus EJ. Antimicrobial resistance: an antimicrobial/diagnostic stewardship and infection prevention approach. Med Clin North Am. 2018;102(5):819–29.

  5. Ebell MH, Chupp H, Cai X, Bentivegna M, Kearney M. Accuracy of signs and symptoms for the diagnosis of community-acquired pneumonia: a meta-analysis. Acad Emerg Med. 2020;27(7):541–53.

  6. Garin N, Marti C, Scheffler M, Stirnemann J, Prendki V. Computed tomography scan contribution to the diagnosis of community-acquired pneumonia. Curr Opin Pulm Med. 2019;25(3):242–8.

  7. Claessens YE, Debray MP, Tubach F, Brun AL, Rammaert B, Hausfater P, et al. Early chest computed tomography scan to assist diagnosis and guide treatment decision for suspected community-acquired pneumonia. Am J Respir Crit Care Med. 2015;192(8):974–82.

  8. Zewde N, Ria F, Rehani MM. Organ doses and cancer risk assessment in patients exposed to high doses from recurrent CT exams. Eur J Radiol. 2022;149:110224.

  9. Suliman II, Khouqeer GA, Ahmed NA, Abuzaid MM, Sulieman A. Low-dose chest CT protocols for imaging COVID-19 pneumonia: technique parameters and radiation dose. Life (Basel). 2023;13(4):992.

  10. Tækker M, Kristjánsdóttir B, Graumann O, Laursen CB, Pietersen PI. Diagnostic accuracy of low-dose and ultra-low-dose CT in detection of chest pathology: a systematic review. Clin Imaging. 2021;74:139–48.

  11. Tækker M, Kristjánsdóttir B, Andersen MB, Fransen ML, Greisen PW, Laursen CB, et al. Diagnostic accuracy of ultra-low-dose chest computed tomography in an emergency department. Acta Radiol. 2021;63:336.

  12. Garg M, Devkota S, Prabhakar N, Debi U, Kaur M, Sehgal IS, et al. Ultra-low dose CT chest in acute COVID-19 pneumonia: a pilot study from India. Diagnostics (Basel). 2023;13(3):351.

  13. Masjedi H, Zare MH, Keshavarz Siahpoush N, Razavi-Ratki SK, Alavi F, Shabani M. European trends in radiology: investigating factors affecting the number of examinations and the effective dose. Radiol Med. 2020;125(3):296–305.

  14. Mahesh M, Ansari AJ, Mettler FA Jr. Patient exposure from radiologic and nuclear medicine procedures in the United States and worldwide: 2009–2018. Radiology. 2023;307(1):e221263.

  15. Prendki V, Scheffler M, Huttner B, Garin N, Herrmann F, Janssens JP, et al. Low-dose computed tomography for the diagnosis of pneumonia in elderly patients: a prospective, interventional cohort study. Eur Respir J. 2018;51(5):1702375.

  16. Skjøt-Arkil H, Heltborg A, Lorentzen MH, Cartuliares MB, Hertz MA, Graumann O, et al. Improved diagnostics of infectious diseases in emergency departments: a protocol of a multifaceted multicentre diagnostic study. BMJ Open. 2021;11(9):e049606.

  17. Cohen JF, Korevaar DA, Altman DG, Bruns DE, Gatsonis CA, Hooft L, et al. STARD 2015 guidelines for reporting diagnostic accuracy studies: explanation and elaboration. BMJ Open. 2016;6(11):e012799.

  18. Conger AJ. Integration and generalization of kappas for multiple raters. Psychol Bull. 1980;88:322–8.

  19. McHugh ML. Interrater reliability: the kappa statistic. Biochem Med (Zagreb). 2012;22(3):276–82.

  20. Mussmann B, Skov PM, Lorentzen MH, Skjøt-Arkil H, Graumann O, Andersen MB, et al. Ultra-low-dose emergency chest computed tomography protocols in three vendors: a technical note. Acta Radiol Open. 2023;12(3):20584601231183900.

  21. Plesner LL, Iversen AK, Langkjær S, Nielsen TL, Østervig R, Warming PE, et al. The formation and design of the TRIAGE study – baseline data on 6005 consecutive patients admitted to hospital from the emergency department. Scand J Trauma Resusc Emerg Med. 2015;23:106.

  22. Kristjánsdóttir B, Taekker M, Andersen MB, Bentsen LP, Berntsen MH, Dahlin J, et al. Ultra-low dose computed tomography of the chest in an emergency setting: a prospective agreement study. Medicine (Baltimore). 2022;101(31):e29553.

  23. Long B, Long D, Koyfman A. Emergency medicine evaluation of community-acquired pneumonia: history, examination, imaging and laboratory assessment, and risk scores. J Emerg Med. 2017;53(5):642–52.

  24. Young M, Marrie TJ. Interobserver variability in the interpretation of chest roentgenograms of patients with possible pneumonia. Arch Intern Med. 1994;154(23):2729–32.

  25. Cheung T, Harianto H, Spanger M, Young A, Wadhwa V. Low accuracy and confidence in chest radiograph interpretation amongst junior doctors and medical students. Intern Med J. 2018;48(7):864–8.

  26. Christiansen JM, Gerke O, Karstoft J, Andersen PE. Poor interpretation of chest X-rays by junior doctors. Dan Med J. 2014;61(7):A4875.

  27. Yapp KE, Brennan P, Ekpo E. The effect of clinical history on diagnostic imaging interpretation – a systematic review. Acad Radiol. 2022;29(2):255–66.

  28. Albaum MN, Hill LC, Murphy M, Li YH, Fuhrman CR, Britton CA, et al. Interobserver reliability of the chest radiograph in community-acquired pneumonia. PORT Investigators. Chest. 1996;110(2):343–50.

  29. Hopstaken RM, Witbraad T, van Engelshoven JM, Dinant GJ. Inter-observer variation in the interpretation of chest radiographs for pneumonia in community-acquired lower respiratory tract infections. Clin Radiol. 2004;59(8):743–52.

  30. Nabasenja C, Barry K, Nelson T, Chandler A, Hewis J. Imaging individuals with obesity. J Med Imaging Radiat Sci. 2022;53(2):291–304.


Acknowledgements

We appreciate the great support from Collective Minds Radiology ApS, Denmark.

Funding

Open access funding provided by University of Southern Denmark. The Region of Southern Denmark supported the operating expenses for conducting the ULD-CTs. A CT scanner for the ULD-CTs was made available by the Department of Radiology, Hospital Lillebaelt, Kolding. Access to Collective Minds Radiology was provided by the University of Southern Denmark via their membership subscription. The University of Southern Denmark and Hospital Sønderjylland supported the operating expenses. The sponsors and supporters did not have any influence on the study or dissemination of the results.

Author information


Contributions

AH, OG, CBM, HS-A, CBL, MHL and SP conceptualised and designed the study. OG and AH developed and conducted the web-based training course for the study clinicians. MBA, BM, OG and CBL were technically responsible for ULD-CT data collection, including development and installation of the ULD-CT study scan protocol. CBM, MG, AAM, IH, JJA, SH, TTT, ANK, SK and UBK participated in the ULD-CT training course and reviewed all study scans forming the index test. CS contributed as reference standard assessor of all study ULD-CTs. AH was responsible for all data management and data analyses and drafted the original manuscript. OG critically reviewed and revised the original manuscript, followed by critical review by all authors. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Anne Heltborg.

Ethics declarations

Ethics approval and consent to participate

Ethical approval was obtained from the Regional Committees on Health Research Ethics for Southern Denmark (S-20200188). All patients provided written and oral informed consent.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Index test assessment template.


Additional file 2: Reference standard assessment template; Parts of the radiological INDEED project ULD-CT assessment template used for this study.

Additional file 3: Cross-tabulations of test result proportions.

Additional file 4: Diagnostic accuracy; subgroup analyses.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Heltborg, A., Mogensen, C.B., Skjøt-Arkil, H. et al. Can clinicians identify community-acquired pneumonia on ultralow-dose CT? A diagnostic accuracy study. Scand J Trauma Resusc Emerg Med 32, 67 (2024). https://doi.org/10.1186/s13049-024-01242-w

