
Table 5 Comparison of standard CPR training versus non-standard face-to-face, hybrid, and online CPR teaching methodologies

From: Cardiopulmonary resuscitation (CPR) training strategies in the times of COVID-19: a systematic literature review comparing different training methodologies

All comparisons below are against standard CPR training (instructor-led, classroom-based).

Non-standard Face-to-Face CPR Training: CPR Performance
1. Similar performance was seen between the peer-led (41.0%, N = 466) and instructor-led (40.3%, N = 471) groups [20].
2. No significant difference between jigsaw and instructor-led group. Chest compression depth was different between ventilation and compression groups (p < 0.01) [21].
3. Flowchart group was more confident than non-flowchart group (7 ± 2 vs. 5 ± 2, p = 0.0009) [22].
4. No difference in willingness to perform CPR (64.7% vs. 55.2%, p = 0.202) between peer-assisted and professional instructor groups [24].
Hybrid CPR Training: CPR Performance
1. The kiosk group outperformed the instructor-led group on hand placement score (4.9) but not on compression depth score (− 5.6) [26].
2. For all outcome measures, mean scores were higher in the interactive-computer training plus instructor-led practice as compared to the instructor-led group [27].
Online CPR Training: CPR Performance
1. Video self-instruction group had superior overall performance with only 19% non-competent trainees in comparison to 43% non-competent trainees in the instructor-led group [28].
2. Forty percent of video self-instruction trainees were competent compared to 16% competent in the instructor-led group [31].
3. Video group had more accurate airway opening (p < 0.001), breathing check (p < 0.001), first rescue breathing (p = 0.004), hand positioning (p = 0.004), and higher confidence and willingness to perform CPR at 3 months [32].
4. The voice advisory mannequin feedback group showed more correct hand positioning (73% vs. 37%, p = 0.014) and a better compression rate (124/min vs. 135/min, p = 0.089) than the instructor-led group. Women in the voice advisory mannequin feedback group showed greater improvement in compression depth (p = 0.018) and adequate compressions (p = 0.021) [34].
5. The video-only group had lower compression depth scores (− 9.9) than the classroom group [26].
6. For all outcome measures, mean scores were higher in the interactive-computer training as compared to the instructor-led group [27].
7. The video-based group performed better than the instructor-based group on scene safety (95.2% vs. 76.1%) and calling for help (97.6% vs. 76.1%) (p < 0.05). Moreover, the video-based group had a shorter response-to-compression time (35 ± 9 s vs. 54 ± 14 s) than the instructor-based group (p < 0.001) [35].
8. The VR group was inferior to face-to-face training in chest compression depth (49 mm vs. 57 mm), chest compression fraction (61% vs. 67%, p < 0.001), proportion of participants fulfilling depth (51% vs. 75%, p < 0.001), and rate requirements (50% vs. 63%, p = 0.01), but superior in chest compression rate (114/min vs. 109/min) and compressions with full release (98% vs. 88%, p = 0.002). The VR group had lower overall scores (10 vs. 12, p < 0.001) as compared to the face-to-face group [37].
9. Immediately post-training, video group had higher scores in overall performance (60% vs. 42%), assessing responsiveness (90% vs. 72%), ventilation volume (61% vs. 40%), and correct hand placement (80% vs. 68%) but lower scores in calling 911 (71% vs. 82%) as compared to instructor-led training [38].
Non-standard Face-to-Face CPR Training: CPR Quality
1. Simplified CPR group performed better on the algorithm (p < 0.01), had higher number and adequate compressions (p < 0.01), and shorter hands-off time (p < 0.001). No difference in time taken to initiate CPR [19].
2. Shorter hands-off time in the flowchart group (147 ± 30 s) versus the non-flowchart group (169 ± 55 s) (p = 0.024). However, time to chest compression was longer in the flowchart group (60 ± 24 s vs. 23 ± 18 s, p < 0.0001) [22].
3. 58% more compressions can be achieved with a silver-staged approach (50:5 ratio) in the first 8 critical minutes. Staged group had better ‘shout for help’ after 2 months (p = 0.02 to p < 0.01) and adequate compressions after retraining (p = 0.05) and at 4 months (p = 0.04) [23].
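The staged-approach arithmetic above can be sketched with a toy timing model: if each ventilation set costs a roughly fixed hands-off pause, a 50:5 cycle spends a smaller fraction of each cycle off the chest than a 30:2 cycle, so more compressions fit in the same window. The rate and pause values below are illustrative assumptions, not parameters taken from the cited study [23], so the computed gain will differ from the reported 58%.

```python
# Toy model: total compressions delivered in a fixed time window when each
# ventilation set interrupts compressions for a fixed hands-off pause.
# All parameter values are illustrative assumptions, not data from [23].

def compressions_in_window(compressions_per_cycle: int,
                           rate_per_min: float = 110.0,
                           pause_per_set_s: float = 13.0,
                           window_s: float = 480.0) -> float:
    """Approximate compressions delivered in window_s seconds (8 min default)."""
    compression_time = compressions_per_cycle * 60.0 / rate_per_min
    cycle_time = compression_time + pause_per_set_s  # compress, then ventilate
    return window_s / cycle_time * compressions_per_cycle

conventional = compressions_in_window(30)  # conventional 30:2 cycle
staged = compressions_in_window(50)        # staged ("silver") 50:5 cycle
print(f"30:2 -> {conventional:.0f} compressions in 8 min")
print(f"50:5 -> {staged:.0f} compressions in 8 min")
print(f"relative gain: {100 * (staged / conventional - 1):.0f}%")
```

Under these assumed numbers the larger compression block yields roughly a fifth more compressions in the first 8 minutes; the study's 58% figure rests on its own timing assumptions for lay rescuers.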
Hybrid CPR Training: CPR Quality
1. No statistically significant difference in time to first chest compression (33 s vs. 31 s, U = 1171, p = 0.73) and number of total chest compressions (101.5 vs. 104, U = 1083, p = 0.75) between the instructor-led and flipped learning group, respectively [25].
2. There was no significant difference on total scores between instructor-led and kiosk participants [26].
Online CPR Training: CPR Quality
1. The instructor-led training group showed superior performance to the computer-based training group in the quality of CPR compressions (location, rate, depth, and release) [29].
2. Both brief video and instructor-led group called 911 more frequently and sooner, started chest compression earlier, and had improved chest compression rates and hands-off time. However, chest compression depth was better in the instructor-led versus the brief video group [30].
3. The voice advisory mannequin feedback group had more compressions with adequate depth and hand placement, and more ventilations with adequate volume, than the instructor-led group. However, compression rates between the groups were similar [33].
4. The video-only group had a lower total score (compression rate, depth, and correct hand placement) (− 9.7) than the instructor-led group [26].
Non-standard Face-to-Face CPR Training: CPR Knowledge
1. Better retention was seen in the bronze (50 compressions) and silver (50 compressions:5 breaths) stages when compared to conventional training [23].
2. No difference in knowledge retention (61.76 ± 17.80 vs. 60.78 ± 39.77, p = 0.848) between peer-assisted and professional instructor groups [24].
Hybrid CPR Training: CPR Knowledge
1. Mean CPR knowledge was above 80% with use of a computer program two days after training [27].
Online CPR Training: CPR Knowledge
1. Although the computer-based training group had lower scores, the difference from the instructor-led training group was not significant [29].
2. Video self-instruction trainees and instructor-led trainees achieved comparable scores on CPR-related knowledge and attitudes [31].
3. Mean CPR knowledge was above 80% with use of a computer program two days after training [27].
4. After 3 months, the instructor group had a better score in assessment of breathing (91% vs. 72%) than the DVD-based group (p = 0.03). However, the DVD-based group had better average inflation volume (844 ml vs. 524 ml, p = 0.006) and chest compression depth (45 mm vs. 39 mm, p = 0.005) [36].
5. At 2 months post-training, video group had higher scores in overall performance (44% vs. 30%), assessing responsiveness (77% vs. 60%), ventilation volume (41% vs. 36%), and correct hand placement (64% vs. 59%) but lower scores in calling 911 (53% vs. 74%) [38].