
Accuracy and tracing time of cephalometric analyses on a tablet or desktop computer

A prospective study

Abstract

Background

This prospective study aimed to evaluate the influence of the computer type (tablet or desktop) on accuracy and tracing time of cephalometric analyses.

Methods

Dental students used a web-based application specifically developed for this purpose to perform cephalometric analyses on tablet and desktop computers. Landmark locations and timestamps were exported to measure the accuracy, successful detection rate and tracing time. Reference landmarks were established by six experienced orthodontists. Statistical analysis included reliability assessment, descriptive statistics, and linear mixed effect models.

Results

Over a period of 8 semesters a total of 277 cephalometric analyses by 161 students were included. The interrater reliability of the orthodontists establishing the reference coordinates was excellent (ICC > 0.9). For the students, the mean landmark deviation was 2.05 mm and the successful detection rate for the clinically acceptable threshold of 2 mm suggested in the literature was 68.6%, with large variations among landmarks. No effect of the computer type on accuracy and tracing time of the cephalometric analyses could be found.

Conclusion

The use of tablet computers for cephalometric analyses can be recommended.


Background

The analysis of lateral cephalograms is a fundamental part of orthodontic diagnostics and treatment planning. It is used to determine the skeletal, dental and soft tissue relations, to evaluate treatment effects and to assess the cervical vertebrae [1,2,3,4,5,6,7]. For this purpose, defined landmarks are placed on the radiographs. These can be anatomical, radiological or constructed points. Parameters such as angles, distances or ratios are measured between these landmarks and compared to standard values.

Standardized cephalometric radiographs were introduced into orthodontics by Broadbent and Hofrath in 1931 [8, 9]. The 22-item analysis used today in the Department of Orthodontics at the University of Münster is based on the analyses by Downs, Ricketts, Rakosi and Steiner [1, 2, 10, 11].

Originally, the analyses were performed manually by drawing the landmarks, angles, and distances on the analog lateral cephalogram by hand [12]. The greatest potential for error has always been in the localization of the landmarks [13]. As early as the 1960s, computer-based systems were developed with the intention of enabling faster and less error-prone cephalometric analyses. Landmark coordinates were initially transferred by hand using a drawing table [14] and later using digital read-out systems [15], which gained acceptance only slowly due to their high cost [16]. Methods for digitizing radiographs had already been developed at that time [17], but until the 1990s these remained qualitatively inferior to digital read-out systems [18]. Nowadays, direct digital x-ray technology eliminates the need for time-consuming and quality-reducing intermediate steps for viewing and tracing cephalograms on a computer. Furthermore, digital radiographs can be adjusted in contrast, brightness and size, so that structures of different translucency can be viewed in detail. Another advantage of digital x-ray technology is the lower radiation dose for the patient [19].

For the diagnosis of digital radiographs a darkened room and a suitable viewing monitor are required. The use of tablet computers for radiographic analysis was already considered shortly after the introduction of the iPad (Apple, Cupertino, CA, USA) in 2010 [20]. Initial comparisons to conventional liquid-crystal displays (LCD) [21, 22] were promising, but the observer performance on iPads was found to be significantly lower than with calibrated monitors [23]. With the introduction of the high-resolution “Retina” display as part of the third-generation iPad in 2012, there was no longer a significant difference in comparison to calibrated viewing monitors [24] and the American Board of Radiology considered the iPad’s retina display adequate for examination in all specialties [25]. There was also no significant difference between tablet computers and viewing monitors in terms of reliability of landmark identification [26]. Finally, a 2015 systematic review found that the use of a tablet computer does not generally affect the interpretation of a radiograph [27].

In contrast to a PC with a viewing monitor, the use of a tablet computer allows for more flexible work. One can perform analyses directly in a darkened lecture hall, and even patient-side use is an option, since sterile packaging and disinfection of the device are possible [28, 29].

When using a tablet computer, inputs are made with the finger or a stylus directly on the touchscreen. The reproducibility of cephalometric analyses on tablet computers using a stylus and on desktop computers using a mouse-driven cursor has been studied previously, and no differences in measurements between the two modalities were found for any of the cephalometric parameters [30].

The aim of this study was to investigate the accuracy and tracing time of dental students when identifying landmarks on lateral cephalograms using a tablet or desktop computer. The null hypothesis was that the device used would have no effect on the accuracy or tracing time of landmark identification.

Methods

This prospective study received approval from the Ethics Commission of the Medical Faculty of the University of Münster, Germany (2021-060-f-S). The study took place at the Department of Orthodontics at the University Hospital Münster, Germany.

Software

A web-based application for performing cephalometric analyses of digital lateral cephalograms was developed.

The application was implemented in TypeScript using the React frontend framework. Internationalisation for German and English was realised using the react-intl library to allow for future international use of the software. To import radiographs according to the Digital Imaging and Communications in Medicine (DICOM) standard, a lightweight parser was implemented.
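The parser itself is part of the TypeScript application and is not reproduced here. Purely as a hedged illustration of the information that has to be extracted from a DICOM file, the following Python sketch uses pydicom; the file name is hypothetical, this is not the authors' implementation, and the pixel-spacing tag in particular varies between devices.

```python
import pydicom

# read a lateral cephalogram in DICOM format (file name is hypothetical)
ds = pydicom.dcmread("cephalogram.dcm")
image = ds.pixel_array  # image matrix used for display and landmark placement

# the pixel-to-mm scale is needed so that landmark deviations can be reported in mm;
# depending on the modality it may be stored in PixelSpacing or ImagerPixelSpacing
spacing = getattr(ds, "PixelSpacing", None) or getattr(ds, "ImagerPixelSpacing", None)
print(image.shape, spacing)
```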

The application allows the brightness, contrast and magnification of the cephalogram to be adjusted freely. The sequence in which the landmarks are placed is suggested by a list representation but can be chosen freely. Placed landmarks can be corrected at any time. To assist the examiner, a small schematic drawing showing the ideal position of the selected landmark and its definition is provided (Fig. 1, Table 1).

Fig. 1 Placement of landmark Nasion in the web-based application. A small schematic drawing at the lower right edge shows the examiner the ideal position of the selected landmark and its definition

Table 1 Definitions of the Landmarks used in the 22-item cephalometric analysis of the University of Münster as shown in the software

The same web-based application was used on both the tablet and desktop computers. Therefore, a software-independent comparison of the cephalometric analysis performed with the two types of computers was possible.

To carry out the analyses, each student was provided with an iPad with Retina display (Apple, Cupertino, CA, USA), while the desktop-based analyses were performed on the students' own desktop computers.

Data acquisition

Of all lateral cephalograms taken at the Department of Orthodontics in 2012-2017, 30 were randomly selected using a random number generator [31]. To obtain the radiographs, the heads of all patients were aligned with the sagittal plane perpendicular to the X-ray beam and the Frankfurt plane parallel to the floor. The teeth were in maximum intercuspation and the lips closed. After anonymisation of the cephalograms, the following exclusion criteria were applied: unerupted or missing incisors, unerupted or missing first molars, malposition of the head in the cephalostat, osteosynthesis plates in situ or a missing scale. Selection was made without regard to gender, type of occlusion or skeletal pattern. After application of the exclusion criteria, 26 radiographs remained. From these, three were finally selected using the random number generator.

Fig. 2 Selection and allocation of the cephalograms with number of cephalograms (\(n_c\)), semesters (\(n_s\)), and analyses (\(n_a\))

One cephalogram was used only to introduce the software to the students. The other two cephalograms (A, B) were analyzed by the students on the tablet and the desktop computer. Two different cephalograms were used to avoid learning effects. The assignment of the cephalograms (A, B) to the computer type (tablet, desktop) was switched each semester so that an influence of the cephalogram could be assessed separately from an influence of the device (Fig. 2).

Fig. 3 Landmarks used in the 22-item cephalometric analysis of the University of Münster: Nasion (N), Basion (Ba), Orbitale (Or), Porion (P), Pterygoid point (Pt), Sella (S), Anterior nasal spine (Spa), Posterior nasal spine (Spp), A point (A), Condylion (Co), Condylar midpoint (DC), Anterior border of the Ramus (R1), Posterior border of the Ramus (R2), Semilunar incisure (R3), Lower border of the Ramus (R4), Ramus midpoint (Xi), Horizontal tangent point (hT), Menton (Me), Pogonion (Po), B Point (B), Suprapogonion (Pm), Constructed gnathion (Gnk), Upper Incisor edge (UpIe), Upper Incisor apex (UpIa), Lower Incisor edge (LoIe), Lower Incisor apex (LoIa), First Upper Molar mesial apex (1UpMma), First Upper Molar distal contact (1UpMdc), Apex nasi (Ap), Subnasal (Sn), Upper Lip (UpL), Lower Lip (LoL), Pogonion molle (Pom). Figure adapted from [32]

Eligible participants were dental students of one orthodontic course that is part of the clinical curriculum in the seventh semester at the University of Münster. All students received the same education on cephalometric analysis. The course consisted of a lecture on the history, landmarks, planes and measurements of cephalometry combined with practical exercises on manual landmark positioning. The course lasted four hours and was divided into five sessions, followed by a 45-minute software demonstration session. The cephalograms used in this study were not used in the teaching or during the demonstration to avoid a learning effect.

Each student performed the 22-item cephalometric analysis of the University of Münster on the tablet computer (using a finger) and desktop computer (using a mouse) in no particular order. The students were instructed to perform the analysis without interruption and in a darkened room. The landmarks used for the 22-item analysis can be found in Fig. 3.

The landmark locations as well as timestamps for the first and last landmark placement were exported from the software in JSON (JavaScript Object Notation) format and submitted for evaluation. The JSON files were pseudonymised and processed using a Python script. The pseudonym was generated from the plain name and a salt (a random string) using the cryptographic one-way function SHA3-256 and subsequent sorting and ranking.
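As a minimal sketch of this processing step, the pseudonymisation and the extraction of the tracing time could look as follows in Python; the JSON field names (`firstTimestamp`, `lastTimestamp`), the file name and the salt value are hypothetical and only illustrate the procedure described above.

```python
import hashlib
import json
from datetime import datetime

SALT = "example-random-salt"  # a random string, as described above (hypothetical value)

def pseudonym(plain_name: str) -> str:
    """Map a plain name to a pseudonym with the one-way function SHA3-256."""
    return hashlib.sha3_256((SALT + plain_name).encode("utf-8")).hexdigest()

def rank_pseudonyms(hashes: list[str]) -> dict[str, int]:
    """Sort the hashes and assign a rank so that each student gets a small integer ID."""
    return {h: rank for rank, h in enumerate(sorted(hashes), start=1)}

def tracing_time_minutes(submission: dict) -> float:
    """Tracing time as the difference between the first and last landmark timestamp."""
    first = datetime.fromisoformat(submission["firstTimestamp"])
    last = datetime.fromisoformat(submission["lastTimestamp"])
    return (last - first).total_seconds() / 60

with open("submission.json") as f:  # one exported analysis (hypothetical file name)
    submission = json.load(f)
print(tracing_time_minutes(submission))
```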

Exclusion criteria for the submitted cephalometric analyses were use of a cephalogram other than the ones provided, incorrect assignment of the cephalogram to the device type, missing landmarks, and duplicate submissions.

To establish a reference, six experienced orthodontists performed the analysis for each image on a desktop computer with the calibrated medical viewing monitor RadiForce RX220 (EIZO, Hakusan, Ishikawa, Japan) in a darkened room. Mean values for each landmark position were used as the reference (\(x_{i_{ref}}\), \(y_{i_{ref}}\)).

Students’ accuracy was evaluated as the mean radial error (MRE) in mm (Eqs. 1 and 2), defined as the sum of all Euclidean distances (\(d_i\)) to the reference landmarks divided by the number of landmarks (\(l=33\)).

$$\begin{aligned} d_i = \sqrt{\left(x_{i_{stud}} - x_{i_{ref}}\right)^2 + \left(y_{i_{stud}} - y_{i_{ref}}\right)^2} \end{aligned}$$
(1)
$$\begin{aligned} MRE = \frac{\sum _{i=1}^{l}{d_i}}{l} \end{aligned}$$
(2)
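Eqs. 1 and 2 translate directly into a few lines of Python; the sketch below assumes the student and reference landmarks are available as (l, 2) arrays of x/y coordinates in mm, which is an assumption about the data layout rather than the actual evaluation script.

```python
import numpy as np

def mean_radial_error(student: np.ndarray, reference: np.ndarray) -> float:
    """MRE according to Eqs. 1 and 2: mean Euclidean distance between
    corresponding student and reference landmarks (arrays of shape (l, 2))."""
    d = np.linalg.norm(student - reference, axis=1)  # d_i for every landmark (Eq. 1)
    return float(d.mean())                           # sum(d_i) / l (Eq. 2)

# toy example with l = 3 landmarks (coordinates in mm)
student = np.array([[10.0, 12.0], [30.5, 40.0], [55.0, 61.0]])
reference = np.array([[10.0, 10.0], [31.0, 40.0], [54.0, 60.0]])
print(round(mean_radial_error(student, reference), 2))  # 1.3
```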

Timestamps of the placement of the first and last landmark were recorded and the difference was used as a measure of the students' tracing time.

The resulting dataset contained the pseudonym of the student, the identifier of the image (A or B), the computer type used (tablet or desktop), the order identifier (0 if this was the student's first analysis, 1 otherwise), the time required for identification of all landmarks in minutes, and the student's accuracy as defined above.
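One record of this dataset might look as follows; the field names and values are hypothetical and only illustrate the structure described above.

```python
record = {
    "pseudonym": 42,        # rank of the hashed student name
    "image": "A",           # cephalogram A or B
    "device": "tablet",     # tablet or desktop
    "order": 0,             # 0 = student's first analysis, 1 = second analysis
    "time_min": 13.4,       # tracing time in minutes
    "accuracy_mm": 2.1,     # mean radial error in mm
}
```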

Statistical analysis

The reliability of the established reference coordinates was assessed with an intraclass correlation coefficient using a two-way mixed effects model for the absolute agreement of multiple raters (ICC(A,k) according to McGraw and Wong [33]) using the irr package [34] for R [35]. The level of reliability was defined according to Koo and Li [36]: poor reliability (\(<0.5\)), moderate reliability (\(0.5-0.75\)), good reliability (\(0.75-0.9\)), excellent reliability (\(>0.9\)).
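The calculation was done with the irr package in R; as a hedged Python equivalent (an assumption for illustration, not the authors' code), the pingouin library reports the average-measures, absolute-agreement coefficient as ICC2k, which corresponds to ICC(A,k) in the notation of McGraw and Wong. The ratings below are made up.

```python
import pandas as pd
import pingouin as pg

# long format: one rating per (landmark, rater) pair; values are illustrative only
df = pd.DataFrame({
    "landmark": ["N", "N", "N", "S", "S", "S", "A", "A", "A"],
    "rater":    ["O1", "O2", "O3"] * 3,
    "x_mm":     [50.1, 50.3, 50.0, 80.2, 80.4, 80.1, 62.5, 62.2, 62.6],
})

icc = pg.intraclass_corr(data=df, targets="landmark", raters="rater", ratings="x_mm")
# ICC2k: absolute agreement, average of k raters, i.e. ICC(A,k)
print(icc.set_index("Type").loc["ICC2k", "ICC"])
```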

Descriptive statistics were calculated for the students' accuracy, tracing time and successful detection rate. A deviation of 2 mm was considered clinically acceptable [37, 38].
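The successful detection rate for a given threshold follows directly from the per-landmark distances \(d_i\); a minimal sketch, assuming the distances are available as a NumPy array:

```python
import numpy as np

def successful_detection_rate(distances_mm: np.ndarray, threshold_mm: float = 2.0) -> float:
    """Share of landmarks whose deviation from the reference lies below the threshold."""
    return float((distances_mm < threshold_mm).mean())

d = np.array([0.8, 1.4, 1.9, 2.6, 4.1])  # example deviations in mm
print(successful_detection_rate(d))       # 0.6, i.e. an SDR of 60% for the 2 mm threshold
```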

A linear mixed effects analysis was performed to test the influence of the device on accuracy and tracing time. Computer type (tablet or desktop), cephalogram (A or B), gender of the student, and order of analysis were considered as fixed effects. A random intercept for subjects was also included. The significance of each fixed effect was tested by a likelihood ratio test of a model with that effect against a null model. In a second step, models with an increasing number of these significant effects were tested against the previous models. Finally, a model with all effects that showed a significant improvement was selected. The linear mixed effects analyses were executed using the lme4 package [39] for R [35].
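The analyses were run with lme4 in R; the following Python sketch with statsmodels is only a rough, hedged equivalent of this model-building procedure under assumed column names and a hypothetical file name.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

# one row per analysis: accuracy (MRE in mm), device, image, gender, order, subject
df = pd.read_csv("analyses.csv")  # hypothetical file name

def fit(formula: str):
    # random intercept per subject; ML instead of REML so likelihood ratio tests are valid
    return smf.mixedlm(formula, df, groups=df["subject"]).fit(reml=False)

def lr_test(full, reduced) -> float:
    """p-value of the likelihood ratio test between two nested models."""
    stat = 2 * (full.llf - reduced.llf)
    dof = len(full.fe_params) - len(reduced.fe_params)
    return float(chi2.sf(stat, dof))

null = fit("accuracy ~ 1")
for effect in ["device", "image", "gender", "order"]:
    print(effect, lr_test(fit(f"accuracy ~ {effect}"), null))
# significant effects are then added one by one and each richer model is
# compared to the previous one in the same way
```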

Results

The study was conducted from 2018 to 2022 over a period of 8 semesters. During this period, 303 analyses were submitted. Of these, 26 analyses had to be excluded according to the exclusion criteria: 16 contained the wrong cephalogram, 5 contained a screenshot of the provided cephalogram, 3 had missing landmarks, and 2 were invalid JSON files, ultimately resulting in 277 submissions with a total of 9141 landmarks being included in the study. The resulting study group consisted of 161 students (108 female, 53 male).

The interrater reliability of the six orthodontists that established the reference coordinates (Table 2) was excellent (\(ICC~>~0.9\)).

Table 2 Reference coordinates for the landmarks as established by six orthodontists with the corresponding interrater reliabilities

Accuracy of students’ landmark identification

The mean landmark deviation of the students was 2.05 mm (SD = 2.63). The landmarks LoIe, UpIe, Ap, Sn, S and N were identified with the smallest deviation. The largest deviations were found for the landmarks R4, Co, R3, P, Ba and R1. The deviations for all landmarks are listed in Table 3 and visualised in Fig. 4. The landmarks as placed by the students are shown in Fig. 5.

Table 3 Accuracy of students’ landmark identification evaluated as the mean radial error and the successful detection rate below different thresholds
Fig. 4 Deviation of the students' landmarks from the reference in mm

Fig. 5 Positioning of the landmarks by the students on image B

The likelihood ratio tests showed a significant effect of the image (\({\chi }^2(1)~=~19.10\), \(p~<~.001\)) and students’ gender (\({\chi }^2(1)~=~5.54\), \(p~=~0.02\)) on the accuracy. The type of computer (\({\chi }^2(1)~=~0.98\), \(p~=~0.32\)) and the order in which the analyses were conducted (\({\chi }^2(1)~=~0.11\), \(p~=~0.75\)) had no significant effect. There was no significant interaction between image and gender (\({\chi }^2(1)~=~0.08\), \(p~=~0.78\)).

The resulting model suggested that image B was more difficult to analyse than image A, with an estimated effect of 0.21 mm, and that male students performed better than female students regardless of the image, with an estimated effect of 0.24 mm. The estimates and confidence intervals of the effects are shown in Table 4.

Table 4 Linear mixed effect model for accuracy (deviation in mm) and tracing time (in minutes per analysis)

Successful detection rate

The successful detection rate (SDR) for the clinically acceptable threshold of 2 mm was 68.6% over all landmarks. The SDR for 2 mm was greater than 90% for 8 landmarks and less than 35% for 4 landmarks. The SDRs for all landmarks and different thresholds are listed in Table 3.

Tracing time

The median tracing time for the students was 11.80 minutes per analysis (IQR 7.70–20.49), while for the orthodontists it was 5.15 minutes (IQR 4.27–7.24).

Regarding students’ tracing time the likelihood ratio tests showed a significant effect of the order in which the analyses were conducted (\({\chi }^2(1)~=~19.55\), \(p~<~.001\)). The image (\({\chi }^2(1)~=~0.08\), \(p~=~0.77\)), type of computer (\({\chi }^2(1)~=~1.53\), \(p~=~0.22\)) and gender (\({\chi }^2(1)~=~0.03\), \(p~=~0.86\)) had no significant effect.

The resulting model suggested that performing the analysis for the second time was faster, with an estimated effect of 11.72 minutes. The estimates and confidence intervals of the effect are shown in Table 4.

Cephalometric measurements

Cephalometric measurements were calculated using both the reference landmarks and the landmarks placed by the students. Significant differences were only found for four of the 22 measurements (facial depth, mandibular plane, relative mandibular length and relative maxillary length), as shown in Table 5.

Table 5 Cephalometric measurements calculated from the reference landmarks and those placed by the students. Descriptive statistics with mean (M) and standard deviation (SD) as well as the results of t tests (assuming heterogeneous variances)

Discussion

The present study focuses on the development and evaluation of a web-based application for performing cephalometric analyses of digital lateral cephalograms. The study results showed no influence of the type of computer (i.e. tablet or desktop) on the students’ accuracy or speed when performing the analysis.

Previous studies on app-based versus manual tracing showed no clinically relevant differences in tracing accuracy [40,41,42]. Recent studies comparing desktop computers to smartphones found comparable results on tracing accuracy [43, 44], but inconsistent results on tracing time [44, 45]. For tablet computers with pen-input, two studies found no significant difference from desktop-computer-based analyses [30, 46] and one study found that the mobile apps were inferior [47]. To our knowledge, there have been no studies comparing computers with touch-input (i.e. smartphone or tablet) with desktop computers, using the same application on both devices.

Most studies comparing the accuracy of tracing methods [30, 41,42,43,44,45, 47] used the cephalometric measurements as a measure of tracing accuracy, while one study from 2015 [46] as well as more recent studies covering neural network based approaches used the landmark location.

The advantage of using landmark locations is that they are easier to compare across studies, as the number of different, non-comparable measurements that can be made with the same set of landmarks is naturally greater. In addition, angular measurements in cephalometry mask placement errors that occur when a landmark is misplaced along one of the arms of the measured angle.

The landmarks identified with the smallest deviation (LoIe, UpIe, Ap, Sn, S and N) are consistent with previous studies on the reliability of cephalometric landmarks [18, 48,49,50,51,52,53,54,55,56,57]. Regarding large deviations, the results are also in agreement with previous studies stating that the identification of landmarks in the petrous temporal region (i.e. Ba, Co and P) is difficult due to superimpositions and that the error is generally larger for landmarks along gradually curved surfaces (i.e. R1, R3 and R4) due to elliptical error distribution [53].

The results of the linear mixed effects model showed that image B was slightly more difficult to analyze, with an increase in mean deviation of 0.2 mm. This could be explained by more structures being superimposed in image B. It was also found that the gender of the students had a significant influence, with male students being more accurate by 0.2 mm.

In the study population, the gender distribution was unbalanced with 108 female and 53 male students. This imbalance is related to the higher prevalence of female students in dental education. In recent decades, the proportion of female students in dentistry has increased, which can be attributed to a higher application rate with comparable admission rates between the genders [58]. Considering the unequal gender distribution and the small effect size found, the gender-specific difference in accuracy should be interpreted with caution.

Regarding the tracing time, the results showed that the students performed the second analysis faster than the first one, with a mean decrease of 11.72 minutes, indicating a learning effect. The fact that the students needed a median of 12 minutes for a cephalometric analysis, while the orthodontists were considerably faster with a median of 5 minutes, shows that the time needed decreases with increasing experience. The other effects considered (i.e. device and gender) had no significant influence on the tracing time.

To assess the clinical performance of the students, cephalometric measurements were calculated for both the reference and student landmarks (Table 5). The variability of the student measurements was comparable to that reported in previous studies [41,42,43,44,45, 47]. A significant difference from the reference was only found for four of the 22 measurements (facial depth, mandibular plane, relative mandibular length and relative maxillary length).

In view of the progress made in the field of automated cephalometry, the question arises as to whether manual landmark positioning is still relevant. Although the automated evaluation of cephalometric analyses has advanced considerably in recent years, with the availability of open annotated datasets [59] and the continuous development of various neural network architectures [60], recent studies comparing cephalometric analyses by such AI-based systems with those performed by experienced orthodontists could only recommend the use of these systems under supervision [61]. On the other hand, the idea of collaboration between AI-based systems and students seems promising [62] and should be evaluated as an approach to support the teaching of cephalometry.

According to our results, using tablets for cephalometric analyses in orthodontic education is an appropriate approach and can be recommended. Considering that teaching cephalometric landmark identification with a smartphone-based application has been shown to be at least equivalent to lecture-based instruction [63], a fully digital workflow seems feasible.

Strengths and limitations

The prospective nature and the large number of submitted cephalometric analyses can be seen as a strength of the present study. It provides valuable data on what can be expected from beginners in orthodontics in terms of accuracy and tracing time. However, this study has some limitations that need to be considered when interpreting the results.

The cephalograms chosen evidently differed in their degree of difficulty. Multiple cephalograms were analysed to minimise bias in the results with respect to landmarks that are particularly difficult to locate in a given image. Due to the voluntary nature of the participation, the analysis of only two cephalograms per student was possible, resulting in a limited sample size. A larger sample size would increase the generalisability of the findings and provide more statistical power.

The study was conducted at the Department of Orthodontics at the University Hospital Münster, Germany. The findings may not be applicable to other universities with different curricula, as there may be variations in the expertise and techniques employed at different institutions.

The students who performed the cephalometric analyses were aware of the computer type (tablet or desktop) they were using. This lack of blinding could introduce bias and influence their performance.

The students were instructed to conduct the analysis without interruption and in a darkened room, but this could not be controlled and should be taken into account when interpreting the results. In addition, the timestamps were only registered for the entire session and not for individual landmarks, since the order of landmark placement and later corrections of their position could not be tracked.

Each student was provided with an iPad (Apple, Cupertino, CA, USA), whereas the desktop-based analyses were performed on the students' own computers. This must be considered another limitation of the present study, as it contributes to the heterogeneity of the desktop-based analyses and because it could not be guaranteed that the respective screens were suitable for x-ray diagnosis.

The overall accuracy of the students was low and the tracing time was high, which was to be expected as the students were taught cephalometry in the semester in which the study was conducted.

The study focused on the students’ accuracy and tracing time as outcome measures. While these measures provide insights into the performance of the web-based application, its clinical validity was not evaluated. Further research is required to identify and address any potential limitations introduced by the software itself and to assess its clinical validity.

Conclusions

No significant influence of the device used to perform a cephalometric analysis was found with regard to accuracy and speed. The use of tablet computers for cephalometric analyses in orthodontic education can be recommended.

Availability of data and materials

The data presented in this study are available on reasonable request from the corresponding author.

References

  1. Downs WB. The role of cephalometrics in orthodontic case analysis and diagnosis. Am J Orthod. 1952;38(3):162–82. https://doi.org/10.1016/0002-9416(52)90106-1.


  2. Steiner CC. Cephalometrics for you and me. Am J Orthod. 1953;39(10):729–55. https://doi.org/10.1016/0002-9416(53)90082-7.


  3. O’Reilly MT, Yanniello GJ. Mandibular growth changes and maturation of cervical vertebrae - a longitudinal cephalometric study. Angle Orthod. 1988;58(2):179–84. https://doi.org/10.1043/0003-3219(1988)058<0179:MGCAMO>2.0.CO;2.

  4. Baccetti T, Franchi L, McNamara JA. An improved version of the cervical vertebral maturation (CVM) method for the assessment of mandibular growth. Angle Orthod. 2002;72(4):316–23.

  5. Baccetti T, Franchi L, McNamara JA. The cervical vertebral maturation (CVM) method for the assessment of optimal treatment timing in dentofacial orthopedics. Semin Orthod. 2005;11(3):119–29. https://doi.org/10.1053/j.sodo.2005.04.005.


  6. Uysal T, Ramoglu SI, Basciftci FA, Sari Z. Chronologic age and skeletal maturation of the cervical vertebrae and hand-wrist: is there a relationship? Am J Orthod Dentofacial Orthop. 2006;130(5):622–8. https://doi.org/10.1016/j.ajodo.2005.01.031.


  7. Segatto E, Segatto A, Braunitzer G, Kirschneck C, Fanghänel J, Danesh G, et al. Craniofacial and cervical morphology related to sagittal spinal posture in children and adolescents. Biomed Res Int. 2014;2014:1–8. https://doi.org/10.1155/2014/638238.


  8. Broadbent BH. A new x-ray technique and its application to orthodontia. Angle Orthod. 1931;1(2):45–66.


  9. Hofrath H. Die Bedeutung der Röntgenfern- und Abstandsaufnahme für die Diagnostik der Kieferanomalien. Fortschr Orthodontik. 1931;1(2):232–58. https://doi.org/10.1007/BF02002578.


  10. Ricketts RM. A foundation for cephalometric communication. Am J Orthod. 1960;46(5):330–57. https://doi.org/10.1016/0002-9416(60)90047-6.


  11. Rakosi T. Atlas und Anleitung zur praktischen Fernröntgenanalyse. 2nd ed. München; Wien: Hanser; 1988.


  12. Bevans CA. Scientific apparatus and laboratory methods: an illuminator to facilitate the tracing of x-rays. Science. 1933;77(1987):116. https://doi.org/10.1126/science.77.1987.116.


  13. Houston WJB, Maher RE, McElroy D, Sherriff M. Sources of error in measurements from cephalometric radiographs. Eur J Orthod. 1986;8(3):149–51. https://doi.org/10.1093/ejo/8.3.149.


  14. Barrett MJ, Brown T, McNulty EC. A computer-based system of dental and cranio-facial measurement and analysis. Aust Dent J. 1968;13(3):207–12. https://doi.org/10.1111/j.1834-7819.1968.tb02272.x.


  15. Bondevik O, Røsler M, Slagsvold O. The digital read-out system CM—1: an instrument for rational measuring on radiographic headplates and dental models. Eur J Orthod. 1981;3(1):1–8. https://doi.org/10.1093/ejo/3.1.1.


  16. Sutton AJ, Spratley MH. A mechanical digitizer for use with cephalometric radiographs. Aust Dent J. 1981;26(4):232–5. https://doi.org/10.1111/j.1834-7819.1981.tb03964.x.


  17. Meyers PH, Becker HC, Sweeney JW, Nice CM, Nettleton WJ. Evaluation of a computer-retrieved radiographic image. Radiology. 1963;81:201–6. https://doi.org/10.1148/81.2.201.


  18. Macri V, Wenzel A. Reliability of landmark recording on film and digital lateral cephalograms. Eur J Orthod. 1993;15(2):137–48. https://doi.org/10.1093/ejo/15.2.137.


  19. Seki K, Okano T. Exposure reduction in cephalography with a digital photostimulable phosphor imaging system. Dentomaxillofac Radiol. 1993;22(3):127–30. https://doi.org/10.1259/dmfr.22.3.8299830.


  20. Frank E. iPad – toy or tool? Int J Comput Dent. 2010;13(4):341–9.


  21. McNulty JP, Ryan JT, Evanoff MG, Rainford LA. Flexible image evaluation. iPad versus secondary-class monitors for review of MR spinal emergency cases, a comparative study. Acad Radiol. 2012;19(8):1023–1028. https://doi.org/10.1016/j.acra.2012.02.021.

  22. Shintaku WH, Scarbecz M, Venturin JS. Evaluation of interproximal caries using the iPad 2 and a liquid crystal display monitor. Oral Surg Oral Med Oral Pathol Oral Radiol. 2012;113(5):e40–4. https://doi.org/10.1016/j.oooo.2011.11.008.


  23. Yoshimura K, Nihashi T, Ikeda M, Ando Y, Kawai H, Kawakami K, et al. Comparison of liquid crystal display monitors calibrated with gray-scale standard display function and with \(\gamma\) 2.2 and iPad: Observer performance in detection of cerebral infarction on brain CT. AJR Am J Roentgenol. 2013;200(6):1304–1309. https://doi.org/10.2214/AJR.12.9096.

  24. Tewes S, Rodt T, Marquardt S, Evangelidou E, Wacker FK, Von Falck C. Evaluation of the use of a tablet computer with a high-resolution display for interpreting emergency CT scans. Rofo. 2013;185(11):1063–9. https://doi.org/10.1055/s-0033-1350155.


  25. Toomey RJ, Rainford LA, Leong DL, Butler ML, Evanoff MG, Kavanagh EC, et al. Is the iPad suitable for image display at American Board of Radiology examinations? AJR Am J Roentgenol. 2014;203(5):1028–33. https://doi.org/10.2214/AJR.13.12274.


  26. Tadinada A, Mahdian M, Sheth S, Chandhoke TK, Gopalakrishna A, Potluri A, et al. The reliability of tablet computers in depicting maxillofacial radiographic landmarks. Imaging Sci Dent. 2015;45(3):175–80. https://doi.org/10.5624/isd.2015.45.3.175.


  27. Caffery LJ, Armfield NR, Smith AC. Radiological interpretation of images displayed on tablet computers: A systematic review. Br J Radiol. 2015;88(1050). https://doi.org/10.1259/bjr.20150191.

  28. Murphy AD, Belcher HJCR. A novel method for sterile intra-operative iPad use. J Plast Reconstr Aesthet Surg. 2012;65(3):403–4. https://doi.org/10.1016/j.bjps.2011.08.037.


  29. Howell V, Thoppil A, Mariyaselvam M, Jones R, Young H, Sharma S, et al. Disinfecting the iPad: evaluating effective methods. J Hosp Infect. 2014;87(2):77–83. https://doi.org/10.1016/j.jhin.2014.01.012.


  30. Goracci C, Ferrari M. Reproducibility of measurements in tablet-assisted, PC-aided, and manual cephalometric analysis. Angle Orthod. 2014;84(3):437–42. https://doi.org/10.2319/061513-451.1.


  31. Haahr M. Random.org: true random number service. 1998–2018. https://www.random.org. Accessed 9 Feb 2018.

  32. Stamm T, Kanemeier M, Dirksen D, Middelberg C, Hohoff A, Kleinheinz J, et al. The position of the virtual hinge axis in relation to the maxilla in digital orthognathic surgery planning—a k-means cluster analysis. J Clin Med. 2023;12(10):3582. https://doi.org/10.3390/jcm12103582.


  33. McGraw KO, Wong SP. Forming inferences about some intraclass correlation coefficients. Psychol Methods. 1996;1(1):30–46. https://doi.org/10.1037/1082-989X.1.1.30.


  34. Gamer M, Lemon J, Singh IFP. irr: various coefficients of interrater reliability and agreement. 2019. R package version 0.84.1. https://CRAN.R-project.org/package=irr. Accessed 6 May 2023.

  35. R Core Team. R: A language and environment for statistical computing. Vienna; 2023. https://www.R-project.org/. Accessed 6 May 2023.

  36. Koo TK, Li MY. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J Chiropr Med. 2016;15(2):155–63. https://doi.org/10.1016/j.jcm.2016.02.012.


  37. Proffit WR, Turvey TA, Phillips C. The hierarchy of stability and predictability in orthognathic surgery with rigid fixation: an update and extension. Head Face Med. 2007;3(1):21. https://doi.org/10.1186/1746-160X-3-21.


  38. Hwang HW, Moon JH, Kim MG, Donatelli RE, Lee SJ. Evaluation of automated cephalometric analysis based on the latest deep learning method. Angle Orthod. 2021;91(3):329–35. https://doi.org/10.2319/021220-100.1.


  39. Bates D, Mächler M, Bolker B, Walker S. Fitting linear mixed-effects models using lme4. J Stat Softw. 2015;67(1):1–48. https://doi.org/10.18637/jss.v067.i01.


  40. Erkan M, Gurel HG, Nur M, Demirel B. Reliability of four different computerized cephalometric analysis programs. Eur J Orthod. 2012;34(3):318–21. https://doi.org/10.1093/ejo/cjr008.


  41. Meric P, Naoumova J. Web-based fully automated cephalometric analysis: comparisons between app-aided, computerized, and manual tracings. Turk J Orthod. 2020;33(3):142–9. https://doi.org/10.5152/TurkJOrthod.2020.20062.


  42. Zamrik OM, İşeri H. The reliability and reproducibility of an android cephalometric smartphone application in comparison with the conventional method. Angle Orthod. 2021;91(2):236–42. https://doi.org/10.2319/042320-345.1.


  43. Livas C, Delli K, Spijkervet FKL, Vissink A, Dijkstra PU. Concurrent validity and reliability of cephalometric analysis using smartphone apps and computer software. Angle Orthod. 2019;89(6):889–96. https://doi.org/10.2319/021919-124.1.


  44. Chugh VK, Bhatia NK, Shastri D, Shankar SP, Singh S, Sardana R. Interobserver and intraobserver reliability of cephalometric measurements performed on smartphone-based application and computer-based imaging software: a comparative study. Turk J Orthod. 2023;36(2):94–100. https://doi.org/10.4274/TurkJOrthod.2022.2022.60.


  45. Sayar G, Kilinc DD. Manual tracing versus smartphone application (app) tracing: a comparative study. Acta Odontol Scand. 2017;75(8):588–94. https://doi.org/10.1080/00016357.2017.1364420.


  46. Cutrera A, Barbato E, Maiorana F, Giordano D, Leonardi R. Reproducibility and speed of landmarking process in cephalometric analysis using two input devices: mouse-driven cursor versus pen. Ann Stomatol (Roma). 2015;6(2):47–52.


  47. Aksakallı S, Yılancı H, Görükmez E, Ramoğlu Sİ. Reliability assessment of orthodontic apps for cephalometrics. Turk J Orthod. 2016;29(4):98–102. https://doi.org/10.5152/TurkJOrthod.2016.1618.


  48. Richardson A. An investigation into the reproducibility of some points, planes, and lines used in cephalometric analysis. Am J Orthod. 1966;52(9):637–51. https://doi.org/10.1016/0002-9416(66)90212-0.


  49. Baumrind S, Frantz RC. The reliability of head film measurements. Am J Orthod. 1971;60(2):111–27. https://doi.org/10.1016/0002-9416(71)90028-5.


  50. Midtgård J, Björk G, Linder-Aronson S. Reproducibility of cephalometric landmarks and errors of measurements of cephalometric cranial distances. Angle Orthod. 1974;44(1):56–61. https://doi.org/10.1043/0003-3219(1974)044<0056:ROCLAE>2.0.CO;2.


  51. Broch J, Slagsvold O, Røsler M. Error in landmark identification in lateral radiographic headplates. Eur J Orthod. 1981;3(1):9–13. https://doi.org/10.1093/ejo/3.1.9.

  52. Stabrun AE, Danielsen K. Precision in cephalometric landmark identification. Eur J Orthod. 1982;4(3):185–96. https://doi.org/10.1093/ejo/4.3.185.


  53. Chate RAC. Cephalometric landmark identification within the petrous temporal region. Br J Orthod. 1987;14(1):33–41. https://doi.org/10.1179/bjo.14.1.33.


  54. Savage AW, Showfety KJ, Yancey J. Repeated measures analysis of geometrically constructed and directly determined cephalometric points. Am J Orthod Dentofacial Orthop. 1987;91(4):295–9. https://doi.org/10.1016/0889-5406(87)90169-7.


  55. Hagemann K, Vollmer D, Niegel T, Ehmer U, Reuter I. Prospective study on the reproducibility of cephalometric landmarks on conventional and digital lateral headfilms. J Orofac Orthop. 2000;61(2):91–9. https://doi.org/10.1007/BF01300351.


  56. Schulze RKW, Gloede MB, Doll GM. Landmark identification on direct digital versus film-based cephalometric radiographs: a human skull study. Am J Orthod Dentofacial Orthop. 2002;122(6):635–42. https://doi.org/10.1067/mod.2002.129191.


  57. Chen YJ, Chen SK, Yao JCC, Chang HF. The effects of differences in landmark identification on the cephalometric measurements in traditional versus digitized cephalometry. Angle Orthod. 2004;74(2):155–61. https://doi.org/10.1043/0003-3219(2004)074<0155:TEODIL>2.0.CO;2.


  58. Kærsgaard JLB, Christensen MK, Søndergaard PY, Naukkarinen J. Gender differences in dentistry: a qualitative study on students’ intrinsic and extrinsic motivations for entering dentistry at higher education. Eur J Dent Educ. 2021;25(3):495–505. https://doi.org/10.1111/eje.12625.

  59. Wang CW, Huang CT, Lee JH, Li CH, Chang SW, Siao MJ, et al. A benchmark for comparison of dental radiography analysis algorithms. Med Image Anal. 2016;31:63–76. https://doi.org/10.1016/j.media.2016.02.004.


  60. Moon JH, Hwang HW, Yu Y, Kim MG, Donatelli RE, Lee SJ. How much deep learning is enough for automatic identification to be reliable? Angle Orthod. 2020;90(6):823–30. https://doi.org/10.2319/021920-116.1.


  61. Kunz F, Stellzig-Eisenhauer A, Widmaier LM, Zeman F, Boldt J. Assessment of the quality of different commercial providers using artificial intelligence for automated cephalometric analysis compared to human orthodontic experts. J Orofac Orthop. 2023. https://doi.org/10.1007/s00056-023-00491-1.


  62. Le VNT, Kang J, Oh IS, Kim JG, Yang YM, Lee DW. Effectiveness of human-artificial intelligence collaboration in cephalometric landmark detection. J Pers Med. 2022;12(3):387. https://doi.org/10.3390/jpm12030387.


  63. Golshah A, Dehdar F, Imani MM, Nikkerdar N. Efficacy of smartphone-based mobile learning versus lecture-based learning for instruction of cephalometric landmark identification. BMC Med Educ. 2020;20(1):287. https://doi.org/10.1186/s12909-020-02201-6.



Acknowledgements

The authors are grateful to S. Rohde and L. Stadtmann for their contribution to the establishment of the reference landmarks.

Funding

Open Access funding enabled and organized by Projekt DEAL. This article was supported by the Open Access Publication Fund of the University of Münster. Besides this, the research received no external funding.

Author information


Contributions

MK, CM and TS designed the study. MK, CM, TS, and JQS acquired the data. MK, FA and JQS performed the statistical analysis. CM and AH contributed to the interpretation of the data. MK created the new software used in the work. MK and JQS wrote the manuscript. CM, TS, FA and AH critically reviewed the manuscript. AH supervised the study. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Moritz Kanemeier.

Ethics declarations

Ethics approval and consent to participate

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Commission of the Medical Faculty of the University of Münster, Germany (2021-060-f-S). Informed consent was obtained from all participants in this study.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Kanemeier, M., Middelberg, C., Stamm, T. et al. Accuracy and tracing time of cephalometric analyses on a tablet or desktop computer. Head Face Med 20, 9 (2024). https://doi.org/10.1186/s13005-024-00413-5

