
Automatic Detection of Periapical Osteolytic Lesions on Cone-beam Computed Tomography Using Deep Convolutional Neuronal Networks

Open Access | Published: August 08, 2022 | DOI: https://doi.org/10.1016/j.joen.2022.07.013

      Abstract

      Introduction

      Cone-beam computed tomography (CBCT) is an essential diagnostic tool in oral radiology. Radiolucent periapical lesions (PALs) represent the most frequent jaw lesions. However, the description, interpretation, and documentation of radiological findings, especially incidental findings, are time-consuming and resource-intensive, requiring a high degree of expertise. To improve quality, dentists may use artificial intelligence in the form of deep learning tools. This study was conducted to develop and validate a deep convolutional neuronal network for the automated detection of osteolytic PALs in CBCT data sets.

      Methods

      CBCT data sets from routine clinical operations (maxilla, mandible, or both) performed from January to October 2020 were retrospectively screened and selected. A 2-step approach was used for automatic PAL detection. First, tooth localization and identification were performed using the SpatialConfiguration-Net based on heatmap regression. Second, binary segmentation of lesions was performed using a modified U-Net architecture. A total of 144 CBCT images were used to train and test the networks. The method was evaluated using the 4-fold cross-validation technique.

      Results

      The success detection rate of the tooth localization network ranged between 72.6% and 97.3%, whereas the sensitivity and specificity values of lesion detection were 97.1% and 88.0%, respectively.

      Conclusions

      Although PALs showed variations in appearance, size, and shape in the CBCT data set and a high imbalance existed between teeth with and without PALs, the proposed fully automated method provided excellent results compared with related literature.

Significance

Deep learning is becoming an increasingly powerful instrument in dentistry, intended to reduce professionals' workload and to improve the efficiency and precision of interpreting findings.

Introduction
Cone-beam computed tomography (CBCT) is widely used in dentists' daily routine for various indications, and its use continues to expand [1]. CBCT enables a significantly more precise radiological analysis than conventional imaging. Studies have reported that, in the case of periapical lesions (PALs), CBCT detected 60.9% and 91.3% of lesions, compared with 39.5% and 69.5% detected by conventional periapical radiography, respectively [2,3]. However, the interpretation and documentation of findings, especially incidental findings, are time-consuming and resource-intensive and require a high degree of expertise [1,4]. Furthermore, Leonardi Dutra et al [5] reported rather variable intraobserver and interobserver agreement, with kappa values of 0.23–0.72 and 0.38–0.86, respectively. In the case of misinterpretation, both local and systemic effects may substantially influence patients' oral and general health [6].
To improve treatment quality and patient safety, artificial intelligence, more precisely deep learning with different types of neuronal networks, has been applied to improve various medical fields [7,8]. Deep learning is inspired by the neuronal meshes of the human brain and is currently the most promising technology in the field of artificial intelligence [9]. Different convolutional neuronal networks (CNNs) have been developed, among which AlexNet [10], GoogLeNet [11], and U-Net [12] are the most widely applied; they vary in technical aspects, potential applications, and performance. In comparison, the SpatialConfiguration-Net (SCN) [13,14] is less widespread. In general, such networks consist of artificial neurons representing a mathematical model of biological neurons. CNNs were specifically constructed for spatially structured images, as required for radiological diagnostics. Because they contain more than 1 hidden layer, they are referred to as deep CNNs [10]. CNNs are generally trained in a supervised manner, in which the training data consist of training instances and the corresponding ground truth labels, with the goal of obtaining accurate predictions on new, previously unseen data [9].
Deep learning is becoming an increasingly consequential instrument in dentistry [15], aimed at reducing the workload of professionals in a time of constantly growing amounts of data, increasing the efficiency of reading and reporting those data, and improving the precision of interpreting findings [16]. Various dental applications with promising results have been documented in cariology [17], endodontology [18], periodontology [19], orthodontics [20], and forensic dentistry [21]. Although en face or intraoral photographs are used for gingivitis detection [22] or facial feature classification [23], deep learning in dentistry is primarily related to radiological imaging [15,16]. To date, most research has focused on image segmentation and anatomic landmark detection [15,24]. For instance, Wang et al [24] reported a 2-mm success detection rate of 74.84% for 100 two-dimensional cephalometric images, whereas Payer et al [14] and Thaler et al [25] reported improved 2-mm success detection rates of 76.00%–89.76%.
In the fields of oral radiology and pathology, the literature on the automated detection of jaw lesions is rather modest. There are some reports on the automatic detection and differentiation of cystic jaw lesions [26,27]. For instance, Lee et al [26] included CBCT volumes in addition to panoramic radiographs when evaluating the automated detection of radicular cysts, keratocysts, and dentigerous cysts. Regarding the detection of PALs in CBCT data sets, to the best of our knowledge, only Setzer et al [28], Zheng et al [29], and Orhan et al [30] have developed CNN approaches for this task. Despite promising methods, each has certain limitations.
Therefore, the aim of this study was to develop and validate an optimized, fully automated deep CNN for the detection and segmentation of PALs in routine 3-dimensional (3D) CBCT data sets.

      Materials and Methods

Ethical approval was provided by the local ethics commission (review board number 33-048 ex 20/21).

      CBCT Data Set Selection

      CBCT volumes from routine clinical operations performed for different diagnostic indications (ie, implant planning, radiological assessment of impacted teeth, assessment of odontogenic tumors or other lesions, and orthodontic reasons) from January to October 2020 were retrospectively screened and selected according to the following criteria with subsequent anonymization:
1. Field of view with a representation of the entire dental arch (upper jaw, lower jaw, or both)
2. No edentulism
3. Completed root development
4. An acceptable degree of scatter and/or artifacts (excluding clinically insufficient, uninterpretable data sets, ie, severe metal artifacts inhibiting visualization of individual crowns, and ghost effects/double images due to motion artifacts)
      All CBCT investigations were conducted using Planmeca ProMax 3D Max (Planmeca, Helsinki, Finland) with a field of view of 10.0 × 5.9 cm or 10.0 × 9.3 cm, covering at least 1 complete dental arch, with a 200-μm voxel size (96 kV, 5.6–9.0 mA, 12 s), which is labeled as “normal” mode by the manufacturer. An initial data set screening was performed on an MDNC-2221 monitor (resolution 1600 × 1200; size 432 × 324 mm; 59.9 Hz; Barco Control Rooms GmbH, Karlsruhe, Germany) using the Planmeca Romexis software (Planmeca, Helsinki, Finland).

      CNN Architecture

The method for automated PAL detection was developed, trained, and validated in a multistage process using neural networks (Fig. 1). Because of current graphics processing unit memory limitations, 3D CBCT images cannot be processed directly at their original resolution, yet reducing the resolution would lose image detail and primarily hamper the detection of small lesions. Therefore, the SCN [14] was first trained to automatically detect all teeth present in an input image at a low resolution. Then, based on the detected tooth coordinates, the tooth regions were automatically cropped from the input image at high resolution and fed into the modified U-Net for segmentation. The modified U-Net was trained to produce segmentation maps for all extracted tooth regions at the original, high resolution.
Figure 1. Schematic diagram of the method. A 3D cone-beam computed tomography (CBCT) volume of a complete dental arch was used as input to the SpatialConfiguration-Net (SCN). The network was trained to predict the coordinates of each tooth in an input volume, represented by the yellow points. A gray dashed arrow represents the cropping procedure performed on all detected teeth; for simplicity, only the cropping of one detected tooth is shown. A cropped region of the detected tooth, centered on the predicted tooth coordinates, was used as input to the U-Net, which was trained to produce a segmented volume in which the detected lesion is visualized with a red label. Details of the SCN architecture have been described by Payer et al [14]; we modified the U-Net architecture proposed by Ronneberger et al [12] such that it is optimized for 3D segmentation.
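To make the 2-step flow concrete, the following minimal Python sketch traces one inference pass under stated assumptions: `scn_model` and `unet_model` are hypothetical callables standing in for the trained networks, and the rescaling between the low-resolution localization grid and the high-resolution volume is reduced to a comment. It is an illustration of the described pipeline, not the authors' implementation.

```python
import numpy as np

def crop_around(volume, center, size):
    """Fixed-size crop centered on `center`, zero-padded at the volume borders."""
    out = np.zeros(size, dtype=volume.dtype)
    src, dst = [], []
    for c, s, dim in zip(center, size, volume.shape):
        lo = c - s // 2
        start, stop = max(lo, 0), min(lo + s, dim)
        src.append(slice(start, stop))
        dst.append(slice(start - lo, start - lo + (stop - start)))
    out[tuple(dst)] = volume[tuple(src)]
    return out

def detect_and_segment(volume, scn_model, unet_model, crop_size=(64, 64, 64)):
    """Two-stage flow: localize every tooth, then segment lesions per tooth."""
    heatmaps = scn_model(volume)           # one low-resolution heatmap per tooth
    lesions = {}
    for tooth_id, heatmap in enumerate(heatmaps):
        # Predicted coordinate = voxel with the highest heatmap intensity.
        coord = np.unravel_index(np.argmax(heatmap), heatmap.shape)
        # NB: in practice, `coord` (low-resolution grid) would be rescaled to
        # the high-resolution volume before cropping.
        region = crop_around(volume, coord, crop_size)
        logits = unet_model(region)        # raw logits in (-inf, +inf)
        lesions[tooth_id] = logits >= 0    # threshold at 0 -> binary lesion mask
    return lesions
```

Thresholding the U-Net logits at 0 mirrors the background/foreground rule described later in this section.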
The SCN is a landmark localization network consisting of 2 components, local appearance and spatial configuration, trained end-to-end to produce heatmap images of the teeth. The predicted coordinates were derived directly from the predicted heatmap images: the predicted coordinate for a specific tooth was defined as the coordinate with the highest intensity value in the predicted heatmap image for that tooth. The ground truth data were generated using the open-source Medical Imaging Interaction Toolkit, in which each missing tooth, or tooth that could not be detected in the CBCT image, was annotated with the coordinate (−1, −1, −1); otherwise, the coordinates corresponded to the tooth locations in the 3D CBCT images.
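As an illustration of the heatmap read-out and the (−1, −1, −1) convention for absent teeth, consider the small sketch below; the `min_peak` threshold for declaring a tooth absent is a hypothetical parameter for illustration, not a value reported in the paper.

```python
import numpy as np

MISSING = (-1, -1, -1)  # annotation convention for absent/undetectable teeth

def heatmap_to_coordinate(heatmap, min_peak=0.1):
    """Return the voxel with the highest heatmap intensity, or MISSING if the
    peak is too weak (min_peak is a hypothetical illustrative threshold)."""
    if heatmap.max() < min_peak:
        return MISSING
    idx = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return tuple(int(i) for i in idx)
```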
The semi-automatic weighted total variation framework [31] was used to create the ground truth segmentations of lesions for training the modified U-Net. Several contours in the image region affected by a PAL were marked as input. Based on these input contours, the total variation software segmented the affected regions automatically over iterations, using the energy functional parameters α = 50, β = 15, and γ = 0.55. This segmentation was followed by manual verification and, if necessary, adaptation in the open-source software ITK-SNAP [32] by an oral surgeon specializing in oral radiology. The presence of a PAL was defined as an interruption of the lamina dura with a periapical radiolucency diameter of >0.5 mm, according to the periapical index score described by Estrela et al [3].
The ground truth segmentation images used for training the modified U-Net were cropped together with the original images for each present tooth and fed into the modified U-Net. Considering the average size of a tooth and the variation in the tooth detection step, each image was resampled to an isotropic voxel size of 0.4 mm using trilinear interpolation and cropped to a size of [64, 64, 64] voxels. The center of a cropped image corresponded to the coordinate of the particular tooth predicted by the SCN. Moreover, each image was translated toward the root tips by fixed factors, ensuring that the entire roots of a tooth were visible in the cropped image.
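A minimal sketch of this preprocessing step, assuming SciPy's `scipy.ndimage.zoom` for the trilinear resampling; the apical shift value and stand-in coordinate are purely illustrative, because the paper does not report its fixed translation factors.

```python
import numpy as np
from scipy.ndimage import zoom

def resample_isotropic(volume, spacing_mm, target_mm=0.4):
    """Resample to isotropic voxels; order=1 gives trilinear interpolation."""
    factors = [s / target_mm for s in spacing_mm]
    return zoom(volume, zoom=factors, order=1)

# Example: a 200-um (0.2-mm) CBCT volume is downsampled by 0.5 per axis.
volume = np.random.rand(120, 120, 80).astype(np.float32)
iso = resample_isotropic(volume, spacing_mm=(0.2, 0.2, 0.2))
print(iso.shape)  # (60, 60, 40)

# Hypothetical fixed shift (in voxels) toward the root tips, applied to the
# predicted tooth coordinate before cropping a [64, 64, 64] region around it.
APICAL_SHIFT = np.array([0, 0, 8])
tooth_coord = np.array([30, 30, 20])  # stand-in SCN prediction
crop_center = tooth_coord + APICAL_SHIFT
```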
The modified U-Net architecture consisted of 5 levels. Each convolutional layer had a kernel size of [3, 3, 3] and 16 filter outputs. In the contracting path, 2 convolutional operations were followed by a downsampling layer that performed average pooling. In the expansive path, each upsampling layer performed trilinear interpolation and was followed by 2 convolutional operations. A dropout of 0.3 was applied after each convolutional layer. All layers had a ReLU activation function except the last one, in which the sigmoid activation was removed so that the network output logits instead of probabilities. The voxels of each output image therefore lay in the range (−∞, +∞). A threshold value of 0 was applied to an output image to produce the final prediction segmentation map: all voxels with a negative value were assigned to the background, and all voxels with a nonnegative value were assigned to the foreground. In addition, bounding cuboids were used to perform tooth- or lesion-based evaluations. The network was trained with the focal loss function [33], and the focal Tversky loss [34] and combo loss [35] functions were also applied for comparison. All 3 loss functions were specifically designed to deal with class imbalance. The parameters of the loss functions were set as follows: because the lesion voxels in each cropped image occupied <10% of the volume, causing a high imbalance between positive and negative voxels, the weighting factors α and β were set to 0.9 in the corresponding loss functions to control the class weights. The parameter γ, which controls the downweighting of examples of the majority class, was treated as a hyperparameter and set to 2.0 in the focal loss and focal Tversky loss functions; in the combo loss function, γ was set to 0.5 to control the contribution of the separate loss terms.
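For reference, the sketch below reimplements the binary focal loss [33] with the α = 0.9 and γ = 2.0 settings described above; it operates on raw logits, matching the network output, and is an illustrative NumPy reconstruction rather than the authors' training code.

```python
import numpy as np

def binary_focal_loss(logits, targets, alpha=0.9, gamma=2.0, eps=1e-7):
    """Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    alpha = 0.9 upweights the rare lesion voxels, as in this study;
    gamma = 2.0 downweights easy, well-classified voxels.
    """
    p = 1.0 / (1.0 + np.exp(-logits))   # sigmoid: probability of "lesion"
    p = np.clip(p, eps, 1.0 - eps)      # numerical safety for the log
    p_t = np.where(targets == 1, p, 1.0 - p)              # prob. of true class
    alpha_t = np.where(targets == 1, alpha, 1.0 - alpha)  # class weighting
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))
```

At inference time, thresholding the same logits at 0 yields the final binary segmentation map.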

      Training and Validation

A total of 144 CBCT volumes were selected, comprising 54 scans of only the lower jaw, 74 of only the upper jaw, and 16 of both jaws. This resulted in a total of 2128 teeth (206 with and 1922 without PALs) used for the training and testing procedures.
Both networks were trained and tested using 4-fold cross-validation. For tooth localization with the SCN, the 144 3D CBCT scans were divided into 4 folds of 36 volumes each; in each run, a different fold of 36 volumes was used for testing, and the 3 remaining folds were used for training. For PAL detection with the U-Net, the 2128 cropped tooth scans were divided into 4 folds, over which the 206 osteolytic lesions were distributed uniformly; in each run, a different fold of 532 tooth volumes was used for testing, and the 3 remaining folds were used for training. Approximately 10% of the teeth in each fold were affected by a PAL. Training and testing were performed on an Intel CPU 920 with an NVIDIA GeForce GTX TITAN X, running Ubuntu 20.04 with Python 3.7 and TensorFlow 1.15.0. The inference time of the tooth localization network was approximately 15 seconds per CBCT volume, whereas the inference time of the segmentation network, including the cropping procedure, was approximately 2–3 minutes.
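One way to reproduce such a lesion-balanced split is scikit-learn's StratifiedKFold, sketched below; the label array is a stand-in for the per-tooth PAL annotations, and the actual fold assignment used in the study is not reproduced.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Stand-in labels: 1 for each of the 206 teeth with a PAL, 0 for the rest.
labels = np.array([1] * 206 + [0] * 1922)
features = np.arange(len(labels)).reshape(-1, 1)  # placeholder per-tooth index

# StratifiedKFold keeps the ~10% lesion prevalence in every fold, mirroring
# the uniform distribution of the 206 lesions over the 4 folds described above.
skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(features, labels)):
    print(f"fold {fold}: {len(test_idx)} test teeth, "
          f"{labels[test_idx].sum()} with PALs")
```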

      Evaluation Metrics and Statistical Analysis

Regarding tooth detection, the point-to-point error (PE) metric was used to calculate the Euclidean distance between ground truth and predicted landmarks. In addition, the image-to-point error metric, defined as the mean of the point-to-point errors of all landmarks in a scan, was calculated. The accuracy of the tooth detection network was defined as the percentage of correctly identified landmarks among all predicted landmarks [14,36]. A landmark was identified correctly if the closest ground truth landmark was the correct one and the PE to that ground truth landmark was less than a specified radius r.
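The following minimal sketch implements this identification rule for a single scan, assuming that all landmarks are present and that coordinates are expressed in millimeters; it is an illustrative reconstruction, not the authors' evaluation code.

```python
import numpy as np

def landmark_success_rate(pred, gt, r):
    """Fraction of correctly identified landmarks for one scan.

    pred, gt : arrays of shape (n_landmarks, 3) in mm, same landmark order
    r        : radius in mm (the study evaluates r = 2, 3, 4, and 6 mm)
    """
    correct = 0
    for i, p in enumerate(pred):
        dists = np.linalg.norm(gt - p, axis=1)  # PE to every GT landmark
        # Correct if the nearest ground truth landmark is the matching one
        # and the point-to-point error is below the radius r.
        if np.argmin(dists) == i and dists[i] < r:
            correct += 1
    return correct / len(pred)
```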
The overlap between the ground truth and predicted segmentation maps was evaluated using the Dice score (DSC), true positive rate (TPR), and false negative rate (FNR). The TPR was defined as the overlap only within the area of the ground truth lesion, and the FNR was defined as 1 − TPR. Concerning the precision of lesion detection, sensitivity and specificity were defined as TP/(TP + FN) and TN/(TN + FP), respectively, where TP, FN, TN, and FP count the teeth with and without lesions that were correctly or incorrectly predicted. A predicted segmentation map for a tooth with a lesion was classified as a true positive (TP) if the DSC between the ground truth and predicted segmentation maps was >0; otherwise, it was classified as a false negative (FN). A predicted segmentation map for a tooth without a lesion was classified as a true negative (TN) if all voxels in the predicted segmentation map were 0; otherwise, it was classified as a false positive (FP).
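Correspondingly, the tooth-level classification rule can be sketched as follows, assuming boolean prediction and ground truth volumes per tooth; returning a DSC of 1.0 when both volumes are empty is a common convention and an assumption here, not a rule stated in the paper.

```python
import numpy as np

def dice_score(pred, gt):
    """Dice overlap between binary prediction and ground truth volumes."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0  # empty/empty -> 1.0

def classify_tooth(pred, gt):
    """Tooth-level TP/FN/TN/FP rule described above."""
    if gt.any():                                 # tooth with a lesion
        return "TP" if dice_score(pred, gt) > 0 else "FN"
    return "TN" if not pred.any() else "FP"      # tooth without a lesion
```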

      Results

Both the SCN for automatic tooth localization and the modified U-Net for lesion detection/segmentation were evaluated over the 4 cross-validation folds.
Using a radius of 6 mm to define correctly identified landmarks, the SCN achieved an average accuracy of 97.3%; that is, for 97.3% of predicted teeth, the closest ground truth landmark was the correct one, and the PE to that landmark was <6 mm. For radii of 4, 3, and 2 mm, the average accuracy values were 94.7%, 89.1%, and 72.6%, respectively. In addition, a mean PE of 1.74 ± 1.44 mm and a mean image-to-point error of 0.81 ± 0.44 mm were achieved. For the modified U-Net trained with focal loss, the mean sensitivity and specificity were 0.97 ± 0.03 and 0.88 ± 0.04, respectively: 97.1% of lesions were detected (TPs), 2.9% of lesions were missed (FNs), 88% of teeth without lesions were predicted as TNs, and 12% of teeth without lesions were predicted as FPs. For segmentation, the corresponding mean values of the overlap metrics TPR, FNR, and DSC were 80.9% ± 3.21%, 19.1% ± 3.21%, and 66.5% ± 3.28%, respectively. Figure 2 shows examples of segmentation results for different lesions.
      Figure 2Visualization of lesion detection/segmentation results for a selected cross-section. The first column shows four examples of test images with lesions. The last column shows the overlap between the corresponding ground truth and prediction segmentation maps. Dice score (DSC) values can range between 0, which represents no overlap, and 1, which represents a perfect overlap. High DSC values of 86.7% and 79.2% were obtained for the test images in the first and second rows, respectively. Lower DSC values of 53.7% and 46.9% were obtained for the test images in the third and fourth rows, respectively. For images with smaller lesions, the differences between predicted and ground truth segmentation maps had a larger impact on the DSC value.
      For comparison, when focal Tversky loss was used instead of focal loss, the sensitivity and specificity of the detection results were 92.2% ± 5.06% and 88.5% ± 0.79%, respectively. The segmentation results of focal Tversky loss in terms of TPR, FNR, and DSC were 82.6% ± 7.02%, 17.4% ± 7.02%, and 70% ± 3.56%, respectively. For combo loss, the sensitivity and specificity of the detection results were 85.4% ± 3.87% and 93.7% ± 2.09%, respectively. The segmentation results of combo loss in terms of TPR, FNR, and DSC were 70% ± 3.43%, 30% ± 3.43%, and 70% ± 3.56%, respectively.

      Discussion

This study evaluated a deep CNN algorithm for the automated detection and segmentation of PALs in CBCT volumes. In our opinion, the application of deep learning for detecting periapical radiolucent lesions is of high clinical relevance. We believe it could substantially support dentists in their reporting work by saving time, improving early detection, preventing lesions from being overlooked as incidental findings within large CBCT volumes, and increasing safety for both dentists and patients. We are also convinced that both the identification and the segmentation of lesion volumes are beneficial in dental surgery, for example, for surgical planning or for assessing the healing of lesions. This, in turn, could support treatment planning decisions, for example, regarding nonsurgical retreatment versus surgical retreatment versus extraction. In addition, once the accuracy of artificial intelligence-based techniques is confirmed to be reliable, they can be used in education and training; recent graduates and inexperienced dentists could obtain immediate feedback from the deep CNN without the need for direct supervision by senior staff members.
When comparing the related literature [26,28–30] with our work, the study by Lee et al [26] is the least comparable and not representative of our research question regarding PAL detection. The authors included a large number of 986 CBCT scans in addition to 1140 panoramic radiographs from 247 patients, but their focus was on different odontogenic jaw cysts. By the nature of such pathologies, the radiolucencies were broadly extended, which differs from PALs and therefore yields incommensurable results. Moreover, in their study, all images were cropped, resized, and fed as 2-dimensional slices into GoogLeNet Inception-v3, resulting in 96.1% sensitivity and 77.1% specificity [26].
More consistent with our approach are the studies by Setzer et al [28], Zheng et al [29], and Orhan et al [30]. Both Setzer et al [28] and Zheng et al [29] used 20 selected 3D CBCT images, each covering a limited field of view of 40 mm in diameter and height with a voxel size of 125 μm and containing roots with at least 1 PAL. Orhan et al [30] used volumes from 109 patients, including 153 PAL images, with fields of view varying from 50 to 100 mm in diameter and voxel sizes ranging from 75 to 200 μm. In contrast, we obtained our CBCT volumes from 144 patients, including a cross section of 206 PALs of different shapes and sizes. Our CBCT scans were taken for various indications, not only endodontic reasons, included at least 1 complete jaw, and had a resolution of 200 μm; concerning the labeling mode and case number, this is rather comparable with the study of Orhan et al [30] and quite different from those of Setzer et al [28] and Zheng et al [29].
Regarding the technical approach of Setzer et al [28] and Zheng et al [29], PALs and surrounding structures were semiautomatically segmented as ground truth, followed by manual revision. Five representative slices, rather than the actual 3D image, were then used to create a volume that served as input to a 3D U-Net-based network. Moreover, to address the technical challenge of using 3D images in a neural network, Setzer et al [28] performed 5-fold cross-validation with 4 images in each of the 1 testing and 4 training groups, whereas Zheng et al [29] performed 4-fold cross-validation with 5 images in each of the 1 testing and 3 training groups. Setzer et al [28] reported 93% sensitivity and 88% specificity for lesion detection, whereas Zheng et al [29] reported 84% sensitivity.
Conceptually, the approach in this study is most similar to that of Orhan et al [30]. In their setup, CBCT images were uploaded into a deep CNN following a 2-step approach of tooth detection followed by lesion detection, resulting in 92.8% sensitivity. However, their work did not consider the imbalanced ratio between teeth with and without PALs or the variations in the volume of PALs and the surrounding region, which may have hampered their results.
In the present study, we developed and validated an optimized, fully automated deep CNN for the detection and segmentation of PALs, achieving a sensitivity of 97.1%, which exceeds that of related works [28–30], and a specificity of 88%, equal to that obtained by Setzer et al [28], who used different data sets than ours. To detect PALs in CBCT scans with a large field of view in a fully automated manner, our end-to-end approach takes a 3D CBCT volume as input and provides detected and segmented PAL regions as output. This 2-step design was essential for handling the small PAL volumes: the first step can be conducted at a low image resolution, whereas in the second step, when PALs are detected and segmented, a high-resolution region is cropped around each detected tooth.
In our work, special attention was given to mitigating class imbalance because only a small portion of the 3D CBCT images was occupied by lesions. Class imbalance is a common problem in medical image detection/segmentation tasks and is not accounted for by conventional loss functions such as cross-entropy loss, Dice loss, and mean squared error [37,38]. The imbalance between the 10% of teeth affected by PALs and the 90% of unaffected teeth in the training and testing sets had to be addressed, as did the imbalance within each cropped CBCT image of a tooth, in which a PAL occupies only a small image region. Through the parameters of the 3 loss functions (focal loss, focal Tversky loss, and combo loss), examples of the minority class, considered difficult to detect, and examples of the majority class, considered easy to detect, could be reweighted. The focal loss function dealt best with the class imbalance of our data set, achieving excellent PAL detection results in CBCT scans. The segmentation results of our method are consistent with those reported by Setzer et al [28], although their test data sets differ in resolution and volume from the present work. By inspecting the predictions for individual lesions, we noticed that mis-segmentation was primarily related to PALs with a very small or very large volume. In images with very small lesions, minor differences between the predicted and ground truth segmentation maps had a large impact on the DSC value. Moreover, there were a few images in which the network unexpectedly did not predict a lesion. This may be due to the poor contrast of CBCT images and the varying shape and appearance of lesions, which are generally considered challenging factors.
Regarding the limitations of our data-driven approach, the modest case number of 144 volumes must be taken into consideration. On the other hand, owing to the extended field of view, we included 2128 teeth, a remarkable number that far exceeds the case numbers of related studies [28,29]. Another limitation of this study was the use of a supervised learning approach [16], which made ground truth annotation of PALs necessary. This was done semiautomatically, with optional manual adaptation, by a single expert. Given this expert's advanced knowledge of oral radiology and structure segmentation, neither a multieye principle, in contrast to Setzer et al [28], nor a second check by the same investigator, in contrast to Orhan et al [30], was applied. Moreover, similar to previous studies [28–30], only adult teeth with completed apex development were included in our study. Healthy immature teeth showing a radicular radiolucency due to the presence of the root-developing apical papilla may not have been successfully differentiated from diseased teeth with PALs. Furthermore, the lesion segmentation in our method depends on tooth localization: if a tooth is not detected, the follow-up segmentation of its lesion is not feasible. To minimize the risk of misdetection, however, we used a highly accurate deep learning approach for landmark localization [14] with a success rate of 97%.
In conclusion, the proposed fully automated method provided excellent results, even though we included larger field-of-view volumes with a different resolution than related studies [28–30]. Moreover, the PALs varied in appearance, size, and shape, and there was a high imbalance between positive and negative class examples. We therefore cautiously conclude that our method may be suitable for testing under clinical conditions, which should soon lead to the next step of applying the developed approach within a prospective study. At present, direct comparison with the related literature is unfortunately not possible because the other discussed approaches [28–30] and our method were not evaluated on the same database. Nevertheless, we strongly believe that our results provide a sufficient basis for future developments, such as the inclusion of immature teeth.

      Acknowledgments

      All authors gave their final approval and agree to be accountable for all aspects of the work.
      The authors would like to thank Enago (www.enago.com/) for the English language review.
      This study did not receive any funding.
The authors deny any conflicts of interest related to this study.

      References

1. Jacobs R, Salmon B, Codari M, et al. Cone beam computed tomography in implant dentistry: recommendations for clinical use. BMC Oral Health. 2018;18:88.
2. Lofthag-Hansen S, Huumonen S, Gröndahl K, et al. Limited cone-beam CT and intraoral radiography for the diagnosis of periapical pathology. Oral Surg Oral Med Oral Pathol Oral Radiol Endod. 2007;103:114-119.
3. Estrela C, Bueno MR, Azevedo BC, et al. A new periapical index based on cone beam computed tomography. J Endod. 2008;34:1325-1331.
4. Harvey S, Patel S. Guidelines and template for reporting on CBCT scans. Br Dent J. 2020;228:15-18.
5. Leonardi Dutra K, Haas L, Porporatti AL, et al. Diagnostic accuracy of cone-beam computed tomography and conventional radiography on apical periodontitis: a systematic review and meta-analysis. J Endod. 2016;42:356-364.
6. Sasaki H, Hirai K, Martins CM, et al. Interrelationship between periapical lesion and systemic metabolic disorders. Curr Pharm Des. 2016;22:2204-2215.
7. Ebrahimighahnavieh MA, Luo S, Chiong R. Deep learning to detect Alzheimer's disease from neuroimaging: a systematic literature review. Comput Methods Programs Biomed. 2020;187:105242.
8. Stern D, Payer C, Urschler M. Automated age estimation from MRI volumes of the hand. Med Image Anal. 2019;58:101538.
9. Trask AW. Grokking Deep Learning. Shelter Island, NY: Manning Publications; 2019.
10. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In: Pereira F, Burges CJC, Bottou L, Weinberger KQ, eds. Proceedings of the 25th International Conference on Neural Information Processing Systems (NIPS'12). Vol 1. Red Hook, NY: Curran Associates; 2012:1097-1105.
11. Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2015 Jun 7-12; Boston, MA. New York: IEEE; 2015:1-9.
12. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells W, Frangi A, eds. Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015). Vol 9351. Cham: Springer International Publishing; 2015:234-241.
13. Payer C, Stern D, Bischof H, et al. Regressing heatmaps for multiple landmark localization using CNNs. In: Ourselin S, Joskowicz L, Sabuncu MR, eds. Medical Image Computing and Computer-Assisted Intervention (MICCAI 2016); 2016 Oct 17-21; Athens, Greece. Vol 9901. Cham: Springer International Publishing; 2016:230-238.
14. Payer C, Stern D, Bischof H, et al. Integrating spatial configuration into heatmap regression based CNNs for landmark localization. Med Image Anal. 2019;54:207-219.
15. Hung K, Montalvao C, Tanaka R, et al. The use and performance of artificial intelligence applications in dental and maxillofacial radiology: a systematic review. Dentomaxillofac Radiol. 2020;49:20190107.
16. Schwendicke F, Samek W, Krois J. Artificial intelligence in dentistry: chances and challenges. J Dent Res. 2020;99:769-774.
17. Zheng L, Wang H, Mei L, et al. Artificial intelligence in digital cariology: a new tool for the diagnosis of deep caries and pulpitis using convolutional neural networks. Ann Transl Med. 2021;9:763.
18. Umer F, Habib S. Critical analysis of artificial intelligence in endodontics: a scoping review. J Endod. 2022;48:152-160.
19. Chang HJ, Lee SJ, Yong TH, et al. Deep learning hybrid method to automatically diagnose periodontal bone loss and stage periodontitis. Sci Rep. 2020;10:7531.
20. Khanagar SB, Al-Ehaideb A, Vishwanathaiah S, et al. Scope and performance of artificial intelligence technology in orthodontic diagnosis, treatment planning, and clinical decision-making–a systematic review. J Dent Sci. 2021;16:482-492.
21. Khanagar SB, Vishwanathaiah S, Naik S, et al. Application and performance of artificial intelligence technology in forensic odontology–a systematic review. Leg Med (Tokyo). 2021;48:101826.
22. Alalharith DM, Alharthi HM, Alghamdi WM, et al. A deep learning-based approach for the detection of early signs of gingivitis in orthodontic patients using faster region-based convolutional neural networks. Int J Environ Res Public Health. 2020;17:8447.
23. Schwendicke F, Golla T, Dreher M, et al. Convolutional neural networks for dental image diagnostics: a scoping review. J Dent. 2019;91:103226.
24. Wang CW, Huang CT, Lee JH, et al. A benchmark for comparison of dental radiography analysis algorithms. Med Image Anal. 2016;31:63-76.
25. Thaler F, Payer C, Urschler M, et al. Modeling annotation uncertainty with Gaussian heatmaps in landmark localization. J Mach Learn Biomed Imaging. 2021;14:1-27.
26. Lee JH, Kim DH, Jeong SN. Diagnosis of cystic lesions using panoramic and cone beam computed tomographic images based on deep learning neural network. Oral Dis. 2020;26:152-158.
27. Abdolali F, Zoroofi RA, Otake Y, et al. Automatic segmentation of maxillofacial cysts in cone beam CT images. Comput Biol Med. 2016;72:108-119.
28. Setzer FC, Shi KJ, Zhang Z, et al. Artificial intelligence for the computer-aided detection of periapical lesions in cone-beam computed tomographic images. J Endod. 2020;46:987-993.
29. Zheng Z, Yan H, Setzer FC, et al. Anatomically constrained deep learning for automating dental CBCT segmentation and lesion detection. IEEE Trans Autom Sci Eng. 2020;18:603-614.
30. Orhan K, Bayrakdar IS, Ezhov M, et al. Evaluation of artificial intelligence for detecting periapical pathosis on cone-beam computed tomography scans. Int Endod J. 2020;53:680-689.
31. Bredies K, Kunisch K, Pock T. Total generalized variation. SIAM J Imaging Sci. 2010;3:492-526.
32. Yushkevich PA, Piven J, Hazlett HC, et al. User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability. Neuroimage. 2006;31:1116-1128.
33. Lin TY, Goyal P, Girshick R, et al. Focal loss for dense object detection. IEEE Trans Pattern Anal Mach Intell. 2020;42:318-327.
34. Abraham N, Khan NM. A novel focal Tversky loss function with improved attention U-Net for lesion segmentation. In: Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019); 2019 Apr 8-11; Venice, Italy. New York: IEEE; 2019:683-687.
35. Taghanaki SA, Zheng Y, Zhou SK, et al. Combo loss: handling input and output imbalance in multi-organ segmentation. Comput Med Imaging Graph. 2019;75:24-33.
36. Glocker B, Zikic D, Konukoglu E, et al. Vertebrae localization in pathological spine CT via dense classification from sparse annotations. Med Image Comput Comput Assist Interv. 2013;16:262-270.
37. Johnson JM, Khoshgoftaar TM. Survey on deep learning with class imbalance. J Big Data. 2019;6:1-54.
38. Yeung M, Sala E, Schoenlieb CB, et al. Unified focal loss: generalising dice and cross entropy-based losses to handle class imbalanced medical image segmentation. Comput Med Imaging Graph. 2022;95:102026.