AKtransU-Net: Transformer-Equipped U-Net Model for Improved Actinic Keratosis Detection in Clinical Photography
Main Authors:
Format: Article
Language: English
Published: MDPI AG, 2025-07-01
Series: Diagnostics
Online Access: https://www.mdpi.com/2075-4418/15/14/1752
Summary: **Background:** Integrating artificial intelligence into clinical photography offers great potential for monitoring skin conditions such as actinic keratosis (AK) and skin field cancerization. Identifying the extent of AK lesions often requires more than analyzing lesion morphology; it also depends on contextual cues, such as surrounding photodamage. This highlights the need for models that combine fine-grained local features with a comprehensive global view. **Methods:** To address this challenge, we propose AKtransU-Net, a hybrid U-Net-based architecture. The model incorporates Transformer blocks to enrich feature representations, which are then passed through ConvLSTM modules within the skip connections. This configuration allows the network to maintain semantic coherence and spatial continuity in AK detection. Such global awareness is critical when applying the model to whole-image detection via tile-based processing, where continuity across tile boundaries is essential for accurate and reliable lesion segmentation. **Results:** The effectiveness of AKtransU-Net was demonstrated through comparative evaluations against state-of-the-art segmentation models. A proprietary annotated dataset of 569 clinical photographs from 115 patients with actinic keratosis was used to train and evaluate the models. From each photograph, 512 × 512 pixel crops were extracted using translated lesion boxes that encompassed lesions at different positions and captured different contexts. AKtransU-Net exhibited more robust context awareness and achieved a median Dice score of 65.13%, demonstrating significant progress in whole-image assessments. **Conclusions:** Transformer-driven context modeling offers a promising approach for robust AK lesion monitoring, supporting its application in real-world clinical settings where accurate, context-aware analysis is crucial for managing skin field cancerization.
ISSN: 2075-4418
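To make the tile-based evaluation described in the abstract more concrete, the Python sketch below splits a whole clinical photograph into 512 × 512 tiles, segments each tile, stitches the tiles back into a full-resolution mask, and scores it with the Dice coefficient. This is a minimal illustration under stated assumptions, not the authors' implementation: the `model` callable, the reflect padding, and the 0.5 threshold are hypothetical choices.

```python
import numpy as np

TILE = 512  # crop size used for tile-based processing, as described in the abstract


def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)


def predict_whole_image(image, model, tile=TILE):
    """Tile a whole photograph, segment each tile, and stitch the masks together.

    `model` is assumed (hypothetically) to map a (tile, tile, 3) array
    to a (tile, tile) probability map.
    """
    h, w = image.shape[:2]
    # Pad so both dimensions are multiples of the tile size.
    ph, pw = (-h) % tile, (-w) % tile
    padded = np.pad(image, ((0, ph), (0, pw), (0, 0)), mode="reflect")
    mask = np.zeros(padded.shape[:2], dtype=np.float32)
    for y in range(0, padded.shape[0], tile):
        for x in range(0, padded.shape[1], tile):
            crop = padded[y:y + tile, x:x + tile]
            mask[y:y + tile, x:x + tile] = model(crop)
    # Crop back to the original size and binarize with an assumed 0.5 threshold.
    return (mask[:h, :w] > 0.5).astype(np.uint8)


if __name__ == "__main__":
    # Usage example with random data and a dummy "model" that flags bright pixels.
    rng = np.random.default_rng(0)
    photo = rng.random((1200, 1600, 3)).astype(np.float32)
    dummy_model = lambda crop: crop.mean(axis=-1)
    pred = predict_whole_image(photo, dummy_model)
    truth = (photo.mean(axis=-1) > 0.5).astype(np.uint8)
    print(f"Dice: {dice_score(pred, truth):.4f}")
```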