Abstract
Owing to factors such as illumination, camera quality, and background variation, identifying a face captured with a smartphone-based facial image application is challenging. Face image quality assessment refers to the process of taking a face image as input and producing some form of "quality" estimate as output. Quality assessment techniques typically rely on deep learning models to categorize images, but these models operate as black boxes, which raises questions about their trustworthiness. Explainability techniques have gained importance in building this trust: they provide visual evidence of the image regions on which a deep learning model bases its prediction. Here, we developed a technique for reliably assessing facial images before medical analysis and security operations. A combination of gradient-weighted class activation mapping (Grad-CAM) and local interpretable model-agnostic explanations (LIME) was used to explain the model. The approach was applied to the preselection of facial images for skin feature extraction, an important step in critical medical science applications. We demonstrate that combining the two explanations provides better visual evidence for the model's behavior, with both the saliency-map and the perturbation-based technique verifying its predictions.
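The paper's pipeline is not reproduced here, but the following minimal sketch illustrates how a Grad-CAM saliency map and a LIME perturbation-based explanation can be produced for the same prediction of a Keras MobileNet classifier, as the abstract describes. The ImageNet-pretrained weights stand in for the fine-tuned quality model, the layer name `conv_pw_13_relu` (the last convolutional activation in Keras MobileNet v1), the `predict_fn` helper, and the dummy input image are all illustrative assumptions, not the authors' code.

```python
import numpy as np
import tensorflow as tf
from lime import lime_image

# Assumption: ImageNet-pretrained MobileNet stands in for the paper's
# fine-tuned face image quality classifier.
model = tf.keras.applications.MobileNet(weights="imagenet")

def grad_cam(preprocessed, layer_name="conv_pw_13_relu"):
    """Gradient-weighted class activation map for the top predicted class."""
    grad_model = tf.keras.models.Model(
        model.inputs, [model.get_layer(layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(preprocessed)
        class_score = preds[:, int(tf.argmax(preds[0]))]
    grads = tape.gradient(class_score, conv_out)        # d(score)/d(feature map)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))     # per-channel importance
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # normalized heatmap

def predict_fn(batch):
    """Wraps the model for LIME: raw images in, class probabilities out."""
    batch = tf.keras.applications.mobilenet.preprocess_input(
        batch.astype(np.float32))
    return model.predict(batch, verbose=0)

# Dummy 224x224 image for a self-contained run; use a real face capture.
img = np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8)

# Saliency-map explanation (Grad-CAM); the coarse map is upsampled
# to image size before overlaying in practice.
x = tf.keras.applications.mobilenet.preprocess_input(
    img[np.newaxis].astype(np.float32))
heatmap = grad_cam(x)

# Perturbation-based explanation (LIME) of the same prediction.
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    img.astype(np.double), predict_fn, top_labels=1, num_samples=1000)
overlay, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5)
```

Under this reading of the abstract, agreement between the Grad-CAM heatmap and the LIME superpixel mask over the same face regions is what serves as the combined visual verification of a prediction.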
| Original language | English |
|---|---|
| Pages (from-to) | 558-573 |
| Number of pages | 16 |
| Journal | Journal of Information Processing Systems |
| Volume | 20 |
| Issue number | 4 |
| DOIs | |
| State | Published - Aug 2024 |
Keywords
- Explainable Deep Learning
- Face Image Quality Assessment
- Image Classification
- MobileNet
- Transfer Learning