Original Article | July 2020

WhatsApp as an Emergency Teleradiology Application for Cranial CT Assessment in Emergency Services

By Ibrahim Inan¹, Abdullah Algin², Mehmet Sirik³

Affiliations

  1. Department of Radiology, Biruni University Hospital, Istanbul, Turkey
  2. Department of Emergency Medicine, University of Health Sciences, Umraniye Training and Research Center, Istanbul, Turkey
  3. Department of Radiology, Faculty of Medicine, Adiyaman University, Adiyaman, Turkey
doi: 10.29271/jcpsp.2020.07.730

ABSTRACT
Objective: To evaluate the diagnostic agreement between trauma cranial CT images transmitted through the WhatsApp application and workstation image-based diagnosis.
Study Design: Observational study.
Place and Duration of Study: Department of Emergency Medicine, Adiyaman University Training and Research Hospital, from January 2017 to May 2018.
Methodology: A total of 94 cases that presented to the Emergency Department and underwent cranial CT were included in the study. CT images were video-recorded by the emergency physician using an Apple iPhone 7. The images were evaluated by two different radiologists using Samsung Galaxy Edge 7 and Samsung Note 8 mobile phones. Later, the radiological images were reviewed by two different radiologists at the PACS workstation. Then, the WhatsApp-mediated and final diagnoses were compared for various lesions to evaluate the interobserver agreement and diagnostic success of the use of WhatsApp software.
Results: In the assessment of the interobserver agreement, the kappa values were found to be 0.89 for normal findings, 0.84 for subdural hematoma, 0.73 for subarachnoid hemorrhage, 0.81 for epidural hematoma, 0.85 for fractures, 1 for parenchymal hematoma, and 0.68 for parenchymal contusion.
Conclusion: Although WhatsApp can be used in the evaluation of emergency cranial CT images, it is essential to note that some findings, especially those indicating fractures, subdural hematoma, and parenchymal contusion, can be overlooked.

Key Words: Teleradiology, PACS, Medical software, Computed tomography, WhatsApp, Instant Messenger.

INTRODUCTION

In emergency services, rapid and effective radiological imaging is crucial. It is known that the requirement for CT is rapidly increasing in patients presenting to an emergency department.1 In an ideal situation, the obtained images are evaluated by radiologists specialised in the field; however, in some centres, emergency physicians may need to perform this evaluation themselves when there is no on-duty radiologist, especially when emergency diagnosis and treatment are required. In recent years, with the development of teleradiology, online remote access to these images has become possible, and radiologists have been able to evaluate and report on the images without physically visiting the hospital.

Teleradiology can be implemented in very different ways. Images can be evaluated and reported using mobile phones or tablets through the mobile versions of picture archiving and communication system (PACS) software, or remote access can be provided by installing this software on users' laptops or PCs.2 Although these applications create a more ideal and reliable teleradiology platform when an adequate technological infrastructure is available, they may not be feasible in some emergencies or in centres without such facilities, since they require uncommon, commercial, and specialised software. In such cases, emergency radiological evaluations are performed by transferring videos and pictures through instant messengers. For this purpose, WhatsApp, currently the most widely used platform for video and image transfer, is often employed in clinical practice.3

Currently, the use of instant messengers is controversial due to the loss and degradation of data during video and photo capture, failure to properly transmit visual data, and the consequent potential diagnostic risks.4 In particular, there are doubts about the use of these messengers for small pathologies that are likely to be overlooked.5 Although their value for the evaluation of images in emergencies is undeniable, reliable assessment requires determining which lesions cannot be evaluated through instant messaging.

Table I: Specifications of the monitors and mobile phones used.

| Device | Purpose of use | Screen size | Resolution (pixels) | Operating system |
|---|---|---|---|---|
| Samsung Galaxy Edge 7 | Mobile image evaluation (observer 1) | 5.5" | 1440x2560 | Android |
| Samsung Note 8 | Mobile image evaluation (observer 2) | 6.3" | 1440x2960 | Android |
| iPhone 7 | Recording videos | 4.7" | 1334x750 | iOS |
| iMac Pro | Conventional image evaluation on monitor | 21.5" | 4096x2304 | macOS |

Table II: Statistical results of the observers.

| Finding | Observer | Sensitivity (%) | Specificity (%) | Positive predictive value (%) | Negative predictive value (%) | Accuracy (%) |
|---|---|---|---|---|---|---|
| Normal | Observer 1 | 81.6 | 98.2 | 96.9 | 88.7 | 91.5 |
| Normal | Observer 2 | 82.9 | 94.9 | 90.6 | 90.3 | 90.4 |
| Subdural hematoma | Observer 1 | 90.5 | 84.9 | 63.3 | 96.9 | 86.2 |
| Subdural hematoma | Observer 2 | 95 | 85.1 | 63.3 | 98.4 | 87.2 |
| Subarachnoid hemorrhage | Observer 1 | 91.4 | 86.4 | 80 | 94.4 | 88.3 |
| Subarachnoid hemorrhage | Observer 2 | 82.1 | 85.5 | 80 | 87 | 84 |
| Epidural hematoma | Observer 1 | 100 | 98.8 | 88.9 | 100 | 98.9 |
| Epidural hematoma | Observer 2 | 88.9 | 98.8 | 88.9 | 98.8 | 97.9 |
| Fractures | Observer 1 | 95 | 82.4 | 59.4 | 98.4 | 85.1 |
| Fractures | Observer 2 | 95.7 | 85.9 | 68.8 | 98.4 | 88.3 |
| Parenchymal hematoma | Observer 1 | 100 | 98.7 | 94.1 | 100 | 98.9 |
| Parenchymal hematoma | Observer 2 | 100 | 98.7 | 94.1 | 100 | 98.9 |
| Parenchymal contusion | Observer 1 | 92.9 | 92.5 | 68.4 | 98.7 | 92.6 |
| Parenchymal contusion | Observer 2 | 75 | 91 | 63.2 | 94.7 | 88.3 |

WhatsApp is widely used in emergency clinical practice today for quick radiological consultations. However, no study assessing the effect of this kind of consultation on diagnostic performance has been reported. In this study, the aim was to evaluate the diagnostic performance and interobserver reliability of transmitting the images of one of the most frequently required emergency radiological examinations, cranial CT, through WhatsApp, currently the most widely used instant messenger, compared with workstation image-based diagnosis.

METHODOLOGY

Patients who presented to the Emergency Department, Adiyaman University Hospital, Adiyaman, Turkey, between January 2017 and May 2018 and underwent cranial CT with a preliminary diagnosis of intracranial hemorrhage were evaluated retrospectively. Local Ethics Committee approval was obtained. Of the 108 cases initially evaluated, 14 were excluded due to artifacts or the unavailability of all or some of their CT images in the PACS station. As a result, 94 cases were included in the study. The CT images of the patients were retrospectively video-recorded by the emergency physician with a mobile phone from a high-resolution monitor connected to the PACS station. The screen specifications of the monitors and phones used in the study are summarised in Table I. An Apple iPhone 7 was used for video-recording. Two video recordings of at least 15 seconds each were obtained from the brain and bone windows of the 5-mm transverse sections, and care was taken to scroll through the sections at a constant rate. The videos were recorded from a 21-inch high-resolution (4096 x 2304 pixels) medical monitor by keeping the mobile phone approximately 30 cm away from the monitor and perpendicular to the screen. In addition, the recording was performed in an environment that was as dark as possible to avoid any reflection on the monitor. It was also ensured that the videos fully covered the maximum size of the image on the monitor, but no magnification was used for this purpose. Lastly, during recording, the demographic information display was disabled to protect patient privacy, so that no personal data were included in the videos.

The images were evaluated by two different radiologists using Samsung Galaxy Edge 7 and Samsung Note 8 mobile phones. The observers were blinded to the clinical findings and prior examination results of the patients. These radiologists had five and ten years of experience in cranial CT evaluation, respectively. They were asked to indicate whether the findings were normal or whether there was a subdural hematoma, subarachnoid hemorrhage, epidural hematoma, parenchymal hematoma, calvarial fracture, or parenchymal contusion. The cases were presented in random order during image interpretation.

After the answers to these questions had been obtained independently from the two observers, the radiological images were evaluated by two different radiologists at the PACS station using the monitor from which the videos had been recorded; these radiologists answered the same questions and reached a consensus on the final diagnoses. Then, the WhatsApp-mediated and final diagnoses were compared for different lesions to evaluate the interobserver agreement and diagnostic success of the use of WhatsApp software.

SPSS 21.0 (IBM Corp., Armonk, New York, USA) software was used for statistical analysis. The numerical data were expressed as mean and standard deviation values. The Shapiro-Wilk test was used to evaluate the normality of the distribution of numerical data. The categorical data were expressed as numbers and percentages. Cohen's kappa coefficient was calculated to evaluate interobserver agreement. Sensitivity, specificity, negative predictive value, positive predictive value, and accuracy were calculated by creating cross tables between the observers' WhatsApp-mediated diagnoses and the final diagnoses made by conventional evaluation. Statistical significance was defined at p<0.05.
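
The kappa and cross-table calculations described above can be reproduced with a short sketch. The following is a minimal Python illustration with hypothetical example labels, not the study's SPSS workflow or dataset: each lesion type is coded as a binary present/absent label, a WhatsApp-mediated reading is compared against the final workstation diagnoses for the diagnostic metrics, and the two observers' readings are compared with each other for Cohen's kappa.

```python
# Minimal sketch (hypothetical data, not the study's SPSS workflow):
# cross-table diagnostic metrics and Cohen's kappa for one lesion type,
# with 1 = lesion present and 0 = lesion absent.

def diagnostic_metrics(reference, test):
    """Sensitivity, specificity, PPV, NPV, and accuracy of `test` against `reference`."""
    tp = sum(1 for r, t in zip(reference, test) if r == 1 and t == 1)
    fp = sum(1 for r, t in zip(reference, test) if r == 0 and t == 1)
    fn = sum(1 for r, t in zip(reference, test) if r == 1 and t == 0)
    tn = sum(1 for r, t in zip(reference, test) if r == 0 and t == 0)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / len(reference),
    }

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters with binary (0/1) labels."""
    n = len(rater_a)
    observed = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n
    p_a1, p_b1 = sum(rater_a) / n, sum(rater_b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

# Hypothetical labels for one lesion type in eight cases.
final_diagnosis = [1, 0, 1, 1, 0, 0, 1, 0]   # consensus workstation reading
observer_1      = [1, 0, 0, 1, 0, 1, 1, 0]   # WhatsApp-mediated reading
observer_2      = [1, 0, 1, 1, 0, 1, 1, 0]

print(diagnostic_metrics(final_diagnosis, observer_1))
print(cohens_kappa(observer_1, observer_2))
```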

RESULTS

A total of 94 patients who presented to the Emergency Department and underwent cranial CT were included in the study. The mean age of the cases was 49.8 ± 20.7 years. Sixty patients (63.8%) were males, and 34 (36.2%) were females.

According to the classification based on the final diagnoses obtained by the consensus of two radiologists evaluating the images on the PACS station, 32 cases had normal findings (34%), while subdural hematoma was present in 30 cases (31.9%), subarachnoid hemorrhage in 40 (42.6%), epidural hematoma in nine (9.6%), fractures in 32 (34%), parenchymal hematoma in 17 (18.1%), and parenchymal contusion in 19 (20.2%).

In the assessment of interobserver agreement, the kappa values were calculated as follows: 0.89 for normal findings, 0.84 for subdural hematoma, 0.73 for subarachnoid hemorrhage, 0.81 for epidural hematoma, 0.85 for fractures, 1 for parenchymal hematoma, and 0.68 for parenchymal contusion.

The sensitivity, specificity, predictive values, and accuracy of the diagnoses made by the observers using WhatsApp are summarised in Table II.

DISCUSSION

WhatsApp can be used as a cost-effective and widespread teleradiology application, especially for early diagnosis and rapid initiation of treatment in emergency cases. The aim of this study was to determine the diagnostic success of WhatsApp-mediated CT evaluation and the interobserver agreement. The statistical results differed according to the intracranial pathology present. In the WhatsApp evaluation, the agreement between the two observers in the detection of parenchymal hematoma was almost perfect, and there was also substantial agreement in the identification of normal findings, subdural hematoma, epidural hematoma, and fractures. However, for subarachnoid hemorrhage and parenchymal contusion, the kappa values were 0.73 and 0.68, respectively, indicating a moderate level of agreement between the observers. The reduced interobserver agreement in the diagnosis of these pathologies, which carry high mortality, raises questions about the reliability of this method in the diagnostic process.

The sensitivities in the detection of normal cranial CT findings were close to 80% for both observers, indicating that pathological findings may have been missed in a minority of cases. In the detection of normal cranial CT findings, the specificity and positive predictive values were very high, being greater than 90%. The relatively low sensitivity in the detection of normal CT parenchyma is thought to be secondary to the low positive predictive values in some pathologies, such as subdural hematoma, subarachnoid hemorrhage, and fractures. According to the positive predictive values, up to approximately 30-40% of the cases diagnosed with subdural hematoma, bone fractures, and parenchymal contusion on the WhatsApp evaluation may have been misdiagnosed. These lesions may be obscured on the WhatsApp images due to their very small size and thin structure. However, the size of the lesions and its relationship with diagnostic performance were not evaluated. For subarachnoid hemorrhage, parenchymal hematoma, and epidural hematoma, the positive predictive value was 80% or higher. In addition, the accuracy of the WhatsApp evaluation in the diagnosis of subdural hematoma, subarachnoid hemorrhage, and fractures was less than 90% for both observers. The negative predictive values were close to or above 90% for all diagnoses, suggesting that negative WhatsApp-mediated diagnoses were mostly accurate.
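
The low positive predictive values observed for the less common, subtle lesions are consistent with the general behaviour of PPV, which falls with decreasing prevalence even when sensitivity and specificity are held constant. The short calculation below illustrates this relationship with hypothetical numbers only; it is not a recalculation of the values in Table II.

```python
# Illustration with hypothetical numbers (not recalculated from Table II):
# for a fixed sensitivity and specificity, the positive predictive value
# drops as the lesion becomes less prevalent in the study population.

def ppv(sensitivity, specificity, prevalence):
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

for prevalence in (0.05, 0.10, 0.20, 0.30):
    print(f"prevalence {prevalence:.0%}: PPV = {ppv(0.90, 0.85, prevalence):.1%}")
```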

In the literature, WhatsApp and similar applications have been used for medical purposes, especially to strengthen organisation and communication within the hospital. For example, Kankane et al. reported that the clinical use of WhatsApp in neurosurgery was cost-effective and useful for early diagnosis and early initiation of treatment, particularly in emergency cases.6 In a study undertaken in Ireland, it was found that all the participants selected from the staff of a university hospital had a WhatsApp account and were members of a WhatsApp group created to facilitate communication in their respective clinics. In addition, 95 of the participants considered that the use of instant messaging was beneficial for the security of patients.3

Mobile applications used in the radiology field can be categorised into different groups: software used for decision support, diagnostic applications such as mobile PACS, medical books and encyclopedias, journal reader programmes, and document organisers.7 In the literature, studies on the use of mobile phones or tablets in teleradiology have mostly focused on assessing the diagnostic success of the mobile versions of the PACS applications officially adopted in hospitals, rather than commonly used instant messaging apps such as WhatsApp. CT examination using tablets with high screen resolution has been reported to be successful, but this is still not recommended for primary reporting.2,8-10

Garcia et al. evaluated the cranial CT examinations of 149 patients with a preliminary diagnosis of stroke using an iPad 2 32 GB and an iPhone 4 32 GB. The authors used the OsiriX HD iPad/iPhone version 2.0 mobile software and found that these devices had similar or even higher performance compared to traditional commercial PACS applications.11 However, in the current study, the video recordings were transmitted through WhatsApp rather than a high-resolution mobile PACS application, such as OsiriX, which can explain the lower success rate obtained. Another factor contributing to the difference between the two studies was the screen size of the devices used for image evaluation (Samsung Galaxy Edge 7/Samsung Note 8 versus iPad/iPhone). Other previous studies showed that the iPad had diagnostic success similar to that of PACS stations connected to large LCD monitors in the detection of acute tuberculosis on chest roentgenograms12 and the visualisation of lung nodules on thorax CT.13

Sindhu et al. reported that the iPad was successful in the evaluation of CT and MR images in emergency radiological interpretation.10 In a study conducted in Israel, Shreter et al. found that the radiologists' cranial CT examinations on tablets were accurate in the evaluation of acute pathologies.14 These results indicate that such investigations were mostly carried out using tablets (often iPads), which usually have a larger screen size than mobile phones, and that mobile PACS applications were generally utilised.4 Evaluations undertaken with the iPad have shown results similar to those of LCD monitors, indicating that the use of the iPad 2 in the diagnostic process is feasible. However, in this study, instead of an iPad, mobile phones with smaller screens were used, and the PACS software was replaced with an instant messaging application, WhatsApp.

The literature contains studies that have investigated the use of WhatsApp in the evaluation of radiological images. For example, in a study by Santos et al., the X-ray and CT images of tibial plateau fractures were evaluated using WhatsApp, and the diagnostic concordance and interobserver agreement were reported to be around 80%.15

This study has certain limitations. First, interobserver agreement was investigated only for the evaluations performed by the observers using mobile phones, not for the gold-standard evaluation using medical monitors. Considering that interobserver agreement for certain diagnoses may also have been lower with medical monitors, the results cannot be interpreted as evidence of the weakness of mobile diagnoses. The observers using different mobile devices may also have affected interobserver agreement and caused underestimation of the kappa values. Besides, the video-capturing technique, ambient light, camera lens, and screen features of the phones used for such evaluations vary widely, making it difficult to compare the results of different studies. In addition, in the current study, mobile instant messaging was only utilised for the evaluation of cranial CT images, and different results may be obtained from the examination of other radiological imaging modalities.

CONCLUSION

In conclusion, although WhatsApp can be used in the evaluation of emergency cranial CT images, it is essential to note that some findings, especially those indicating fractures, subdural hematoma, and parenchymal contusion, can be overlooked. Although WhatsApp had approximately 80% sensitivity in detecting normal cranial CT findings in emergency cases, it should be kept in mind that pathological findings may be present in the remaining 20%.

ETHICAL APPROVAL:
Ethical approval was obtained from the Ethics Committee of Adiyaman University Training and Research Hospital prior to the initiation of the research work (approval number 2018/2-28).

PATIENTS’ CONSENT:
Written informed consent was not obtained from the patients because this was a retrospective study. Patient information was kept confidential.

CONFLICT OF INTEREST:
The authors declared no conflict of interest.

AUTHORS’ CONTRIBUTION:
II: Statistics, literature review, leading role in manuscript write-up.
AA: Study design, data collection, compiling, discussion, correspondence.
MŞ: Critical review.

REFERENCES

  1. Larson DB, Johnson LW, Schnell BM, Salisbury SR, Forman HP. National trends in CT use in the emergency department: 1995-2007. Radiology 2011; 258(1):164-73. doi: 10.1148/radiol.10100640.
  2. Grunert J, editor. Features and limitations of mobile tablet devices for viewing radiological images. Rofo 2015; 187(3):173-9. doi: 10.1055/s-0034-1385293.
  3. O’Sullivan DM, O’Sullivan E, O’Connor M, Lyons D, McManus J. WhatsApp Doc? BMJ Innovations 2017; 3(4):238-9. doi:10.1136/bmjinnov-2017-000239
  4. Panughpath SG, Kalyanpur A. Radiology and the mobile device: Radiology in motion. Indian J Radiol Imag 2012; 22(4):246-50. doi: 10.4103/0971-3026.111469.
  5. Sher K, Shah S, Kumar S. Etiologic patterns of ischaemic stroke in young adults. J Coll Physicians Surg Pak 2013; 23(7):472-5.
  6. Kankane VK, Jaiswal G, Gupta TK. Apply of WhatsApp: A quick, simple, smarty and cost competent method of communication in neurosurgery. Romanian Neurosurgery 2016; 30(2):306-12.
  7. Székely A, Talanow R, Bágyi P. Smartphones, tablets and mobile applications for radiology. Eur J Radiol 2013; 82(5):829-36. doi: 10.1016/j.ejrad.2012.11.034.
  8. Schlechtweg PM, Kammerer FJ, Seuss H, Uder M, Hammon M. Mobile image interpretation: diagnostic performance of CT exams displayed on a tablet computer in detecting abdominopelvic hemorrhage. J Digit Imaging 2016; 29(2):183-8. doi: 10.1007/s10278-015-9829-x.
  9. Laughlin PM, Neill SO, Fanning N, Garrigle AMM, Connor OJO, Wyse G, et al. Emergency CT brain: Preliminary interpretation with a tablet device: Image quality and diagnostic performance of the Apple iPad. Emergency Radiology 2012; 19(2):127-33.
  10. John S, Poh AC, Lim TC, Chan EH, Chong LR. The iPad tablet computer for mobile on-call radiology diagnosis? Auditing discrepancy in CT and MRI reporting. J Digit Imag 2012; 25(5):628-34. doi: 10.1007/s10278-012-9485-3.
  11. Garcia F, Criado J, Roch C, Padial J, Pastrana M, Crespo J. iPad 2 and iPhone 4: Is it feasible to assess acute stroke using an Apple mobile device? Radiological Society of North America 2011 Scientific Assembly and Annual Meeting, Chicago, IL; 2011.
  12. Abboud S, Weiss F, Siegel E, Jeudy J. TB or Not TB: Interreader and intrareader variability in screening diagnosis on an iPad versus a traditional display. J Am Coll Radiol 2013; 10(1):42-4. doi: 10.1016/j.jacr.2012.07.019.
  13. Faggioni L, Neri E, Sbragia P, Angeli S, Cini L, Bartolozzi C. Chest CT and the iPad2®: Preliminary 2D assessment of pulmonary nodules. RSNA 2011.
  14. Shreter R, Rozenberg R, Eran A. The utility of tablet computer technology for the on call interpretation of brain CT. RSNA 2011.
  15. Santos MR, Sado Júnior J, Sousa RM, Roriz OR. Reproducibility of Schatzker classification through smartphone applications. Acta Ortop Bras 2016; 24(6):309-11. doi: 10.1590/1413-785220162406159078.