Abstract—Age estimation is a complex task in forensic dentistry, especially when the bodies have begun to decompose. When the examination is performed manually, accuracy can suffer due to the varying experience of the experts, and results may also differ from one expert to another. To improve the speed and accuracy of age estimation using forensic dentistry, researchers have proposed the Dental Age and Sex Network (DASNET), a convolutional neural network. However, the pooling layers and scalar outputs of CNNs cannot preserve equivariance, which is problematic given the complexity of extracting dental structures from panoramic images containing jaws, teeth, lesions, and caries. The authors have therefore developed a deep auto-encoder-decoder architecture that estimates age from semantic and structural feature representations. The age ranges are chosen based on how much the structure of the jaw varies across these particular ranges relative to one another. The authors propose a Convolutional Long Short-Term Memory (ConvLSTM) network to capture the correlation of features and generate high-level feature representations, and they utilize atrous pyramid convolution to produce a multi-scale representation of the generated features. The proposed model thus combines a multi-level and a multi-scale architecture: the first sub-part, the multi-level architecture, extracts hidden features; its output is then fed to the second sub-part, the multi-scale architecture, which enriches the model's capability to encode structural and shape characteristics. The proposed technique reduces the mean error to 0.75 years, compared with 0.93 years for DASNET.
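The multi-scale sub-part described above relies on atrous (dilated) convolution, which enlarges a kernel's receptive field without adding parameters by sampling the input at a fixed dilation rate. The following is a minimal NumPy sketch of this idea, not the authors' implementation; the function name and valid-padding behavior are illustrative assumptions.

```python
import numpy as np

def dilated_conv2d(img, kernel, rate):
    """Valid-mode 2-D correlation with a dilated (atrous) kernel.

    A k x k kernel with dilation `rate` covers an effective window of
    (k-1)*rate + 1 pixels per side, sampling every `rate`-th pixel.
    (Illustrative sketch only -- not the paper's implementation.)
    """
    kh, kw = kernel.shape
    eh, ew = (kh - 1) * rate + 1, (kw - 1) * rate + 1  # effective window size
    H, W = img.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Strided slice picks every `rate`-th pixel inside the window,
            # so the kernel "sees" a wider context at the same cost.
            patch = img[i:i + eh:rate, j:j + ew:rate]
            out[i, j] = np.sum(patch * kernel)
    return out

# A 3x3 kernel at rate 2 spans a 5x5 window, so an 8x8 input
# yields a 4x4 output; with all-ones input and kernel each sum is 9.
out = dilated_conv2d(np.ones((8, 8)), np.ones((3, 3)), rate=2)
print(out.shape)  # (4, 4)
```

An "atrous pyramid" applies several such convolutions in parallel with different rates (e.g. 1, 2, 4) and concatenates the results, giving the multi-scale representation the abstract refers to.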
Index Terms—age estimation, forensic dentistry, DASNET, CNNs, ConvLSTM, Atrous pyramid convolution
Cite: Sultan Alkaabi and Salman Yussof, "Multi-level Multi-scale Deep Feature Encoding for Chronological Age Estimation from OPG Images," Journal of Image and Graphics, Vol. 10, No. 4, pp. 151-157, December 2022.
Copyright © 2022 by the authors. This is an open access article distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives License (CC BY-NC-ND 4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.