TY - JOUR
T1 - Toward Efficient Cancer Detection on Mobile Devices
AU - Lee, Janghyeon
AU - Park, Jongyoul
AU - Lee, Yongkeun
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
AB - Recent advancements in deep learning for cancer detection have achieved impressive accuracy, yet high computational costs and latency remain significant barriers to practical deployment on resource-constrained devices such as smartphones and IoT platforms. This study focuses on optimizing MobileNetV1 and MobileNetV2 models for more efficient, real-time cancer type identification. Through optimization strategies including selective layer unfreezing, pruning, and quantization, we demonstrate significant improvements in model size, inference time, and efficiency. For MobileNetV1, the model size was reduced from 13.1 MB to 3.23 MB and the inference time from 23 ms to 14 ms, with an F1 score above 0.99. For MobileNetV2, the model size was reduced from 9.41 MB to 2.82 MB and the inference time from 24 ms to 13 ms, while maintaining a high F1 score of 0.98. The efficiency scores for MobileNetV1 and MobileNetV2 were 0.984 and 0.994, respectively, significantly outperforming other state-of-the-art neural networks such as VGG16 (efficiency score: 0.458), ResNet50 (0.418), and DenseNet121 (0.731). These findings demonstrate that, with appropriate optimizations, MobileNet models can be deployed on edge devices with high accuracy (above 95%), fast inference times (under one second), and superior efficiency, making them ideal candidates for real-time cancer detection in resource-constrained environments such as mobile and IoT devices.
KW - cancer
KW - Digital pathology
KW - efficient neural networks
KW - machine learning
KW - MobileNet
UR - http://www.scopus.com/inward/record.url?scp=85218757501&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2025.3543838
DO - 10.1109/ACCESS.2025.3543838
M3 - Article
AN - SCOPUS:85218757501
SN - 2169-3536
VL - 13
SP - 34613
EP - 34626
JO - IEEE Access
JF - IEEE Access
ER -