Literature highlights column: From the literature: Life Sciences Discovery and Technology Highlights
Jamien Lim, Tal Murthy
SLAS Technology, Article 100364. Pub Date: 2025-11-12. DOI: 10.1016/j.slast.2025.100364
Efficient microaneurysm segmentation in retinal images via a lightweight Attention U-Net for early DR diagnosis
Muhammad Zeeshan Tahir, Xingzheng Lyu, Muhammad Nasir, Wengan He, Abeer Aljohani, Sanyuan Zhang
SLAS Technology, vol. 34, Article 100323. Pub Date: 2025-10-01 (Epub 2025-07-28). DOI: 10.1016/j.slast.2025.100323

Diabetic Retinopathy (DR) is a complication of diabetes that can cause vision impairment and lead to permanent blindness if left undiagnosed. The increasing number of diabetic patients, coupled with a shortage of ophthalmologists, highlights the urgent need for automated screening tools for early DR diagnosis. Among the earliest and most detectable signs of DR are microaneurysms (MAs). However, detecting MAs in fundus images remains challenging due to several factors, including image quality limitations, the subtle appearance of MA features, and the wide variability in color, shape, and texture. To address these challenges, we propose a novel preprocessing pipeline that enhances the overall image quality, facilitating feature learning and improving the detection of subtle MA features in low-quality fundus images. Building on this preprocessing technique, we further develop a lightweight Attention U-Net model that significantly reduces the number of model parameters while achieving superior performance. By incorporating an attention mechanism, the model focuses on the subtle features of MAs, leading to more precise segmentation results. We evaluated our method on the IDRiD dataset, achieving a sensitivity of 0.81 and specificity of 0.99, outperforming existing MA segmentation models. To validate its generalizability, we tested it on the E-Ophtha dataset, where it achieved a sensitivity of 0.59 and specificity of 0.99. Despite its lightweight design, our model demonstrates robust performance under challenging conditions such as noise and varying lighting, making it a promising tool for clinical applications and large-scale DR screening.
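The paper's code is not reproduced in this column; as a minimal illustration of the additive attention-gate idea behind Attention U-Net variants like the one described above, the following NumPy sketch re-weights skip-connection features by a per-pixel attention coefficient (all weight shapes and names here are our own assumptions, not the authors' implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, W_x, W_g, psi):
    """Additive attention gate (Attention U-Net style).

    x    : skip-connection features, shape (n_pixels, c)
    g    : gating features from the coarser decoder level, shape (n_pixels, c)
    W_x, W_g : (c, c_int) projection matrices
    psi  : (c_int,) vector producing one attention coefficient per pixel
    Returns the re-weighted skip features and the coefficients alpha.
    """
    q = np.maximum(x @ W_x + g @ W_g, 0.0)   # ReLU over the additive fusion
    alpha = sigmoid(q @ psi)                 # attention coefficients in (0, 1)
    return x * alpha[:, None], alpha         # suppress irrelevant skip activations
```

With trained weights, alpha approaches 1 at pixels whose skip features agree with the decoder's gating signal (e.g. candidate MA locations) and 0 elsewhere, which is how the gate focuses the model on subtle features.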
Self-supervised disc and cup segmentation via non-local deformable convolution and adaptive transformer
Wenbo Zhao, Yu Wang
SLAS Technology, vol. 34, Article 100338. Pub Date: 2025-10-01 (Epub 2025-08-09). DOI: 10.1016/j.slast.2025.100338

Optic disc and cup segmentation is a crucial subfield of computer vision, playing a pivotal role in automated pathological image analysis. It enables precise, efficient, and automated diagnosis of ocular conditions, significantly aiding clinicians in real-world medical applications. However, due to the scarcity of medical segmentation data and the insufficient integration of global contextual information, segmentation accuracy remains suboptimal. This issue becomes particularly pronounced in optic disc and cup cases with complex anatomical structures and ambiguous boundaries. To address these limitations, this paper introduces a self-supervised training strategy integrated with a newly designed network architecture to improve segmentation accuracy. Specifically, we first propose a non-local dual deformable convolutional block, which aims to capture irregular image patterns (i.e., boundaries). Second, we modify the traditional vision transformer and design an adaptive K-Nearest Neighbors (KNN) transformation block to extract the global semantic context from images. Finally, an initialization strategy based on self-supervised training is proposed to reduce the network's reliance on labeled data. Comprehensive experimental evaluations demonstrate the effectiveness of our proposed method, which outperforms previous networks and achieves state-of-the-art performance, with IoU scores of 0.9577 for the optic disc and 0.8399 for the optic cup on the REFUGE dataset.
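The abstract does not spell out the "adaptive KNN transformation block"; one common reading of KNN-based attention — each token attends only over its k nearest neighbours in feature space rather than over all tokens — can be sketched as below. This is purely illustrative; the neighbour metric, scaling, and function names are our assumptions, not the authors' design:

```python
import numpy as np

def knn_attention(tokens, k):
    """Sparse self-attention: each token attends only over its k nearest
    neighbours (Euclidean distance, self included) instead of all n tokens.

    tokens : (n, d) float array of token embeddings; returns an (n, d) array.
    """
    n, d = tokens.shape
    # pairwise squared Euclidean distances, shape (n, n)
    diff = tokens[:, None, :] - tokens[None, :, :]
    d2 = (diff ** 2).sum(axis=-1)
    out = np.empty_like(tokens, dtype=float)
    for i in range(n):
        nbrs = np.argsort(d2[i])[:k]                    # k nearest tokens
        scores = tokens[nbrs] @ tokens[i] / np.sqrt(d)  # scaled dot products
        w = np.exp(scores - scores.max())
        w /= w.sum()                                    # softmax over neighbours
        out[i] = w @ tokens[nbrs]
    return out
```

Restricting attention to a neighbourhood keeps the global-context machinery of a transformer while cutting the quadratic cost, which matches the abstract's motivation of extracting semantic context efficiently.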
An integrated deep learning framework using adaptive enhanced vision fusion and modified MobileNet architecture for precision classification of skin diseases with enhanced diagnostic performance
Ahsan Bilal Tariq, Muhammad Zaheer Sajid, Nauman Ali Khan, Muhammad Fareed Hamid, Anwaar UlHaq, Jarrar Amjad
SLAS Technology, vol. 34, Article 100331. Pub Date: 2025-10-01 (Epub 2025-07-16). DOI: 10.1016/j.slast.2025.100331

Due to challenges such as illumination variability, noise, and visual distortions, machine learning (ML) and deep learning (DL) approaches for skin disease evaluation remain complex. Traditional methods often neglect these issues, leading to skewed predictions and poor performance. This research leverages a diverse dataset and robust image processing techniques to enhance diagnostic accuracy under such demanding conditions. We propose Dermo-Transfer, a novel architecture that combines MobileNet with dense blocks and residual connections to improve skin disease severity classification by addressing problems such as vanishing gradients and overfitting. Our method incorporates multi-scale Retinex, gamma correction, and histogram equalization to enhance image quality and visibility. Furthermore, a quantum support vector machine (QSVM) classifier is employed to improve classification performance, providing confidence scores and effectively handling multi-class problems. The proposed approach significantly enhances diagnostic accuracy and outperforms previous models. Dermo-Transfer not only improves pattern recognition and classification accuracy but also robustly handles varying image quality and lighting conditions. Dermo-Transfer was trained on 77,314 images covering skin conditions such as molluscum, warts, eczema, psoriasis, lichen planus, seborrheic keratoses, atopic dermatitis, melanoma, basal cell carcinoma (BCC), melanocytic nevi (NV), benign keratosis, and other benign tumors. The Dermo-Transfer classification method achieved accuracies of 99 %, 98.5 %, 97.5 %, and 89 % across four datasets, demonstrating its effectiveness and potential utility for clinical diagnostics. Additionally, Dermo-Transfer outperformed SkinLesNet and MobileNet V2-LSTM in terms of classification accuracy. Experimental results also highlight how IoT devices and mobile applications can enhance the computational efficiency and practical deployment of the Dermo-Transfer model.
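Two of the enhancement steps named in the abstract, gamma correction and histogram equalization, are standard operations and can be sketched in NumPy as below (the multi-scale Retinex step is omitted, and these are textbook formulations rather than the authors' exact pipeline):

```python
import numpy as np

def gamma_correct(img, gamma):
    """Power-law intensity transform on a float image in [0, 1].
    gamma < 1 brightens dark regions; gamma > 1 darkens them."""
    return np.clip(img, 0.0, 1.0) ** gamma

def hist_equalize(img_u8):
    """Global histogram equalization for a uint8 grayscale image:
    map each gray level through the normalized cumulative histogram."""
    hist = np.bincount(img_u8.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)[0][0]]        # first occupied gray level
    scale = max(cdf[-1] - cdf_min, 1)
    lut = np.clip(np.round((cdf - cdf_min) / scale * 255), 0, 255).astype(np.uint8)
    return lut[img_u8]                          # apply the lookup table
```

In a preprocessing pipeline like the one described, such transforms spread the intensity range of poorly lit lesion images so that downstream convolutional features are less sensitive to illumination variability.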
Movable optical sensor for automatic detection and monitoring of liquid-liquid interfaces
Rodrigo Moreno, Jonas Jensen, Shahbaz Tareq Bandesha, Simone Peters, Andres Faina, Kasper Stoy
SLAS Technology, Article 100335. Pub Date: 2025-10-01 (Epub 2025-08-05). DOI: 10.1016/j.slast.2025.100335

Liquid-liquid extraction (LLE) is an essential operation in many laboratory experiments. However, most automatic LLE devices concentrate on detecting the liquid-liquid interface at a single moment in the process, usually at separation, and pay little attention to the state of the liquids as they settle. In this paper, we present an LLE device in which an optical sensor and light source move along the vessel, rather than the mixture moving relative to the sensor. By analyzing the light-intensity patterns with explainable automatic detection algorithms, the interface can be detected at different positions in the vessel with an error below 2 mm and monitored throughout the settling process. The device is tested using a mixture of clear oil and water and two extraction steps in a battery interface material synthesis process. Results show that the setup is able to detect interfaces at different positions along the vessel, even with changes in vessel diameter. By monitoring the settling process, we found that the largest change in the detected signal occurs around the liquid-liquid interface position, and we use this observation to corroborate the detection. Recording sensor measurements at different positions over time can be used to infer different properties of the liquids, which improves control over the process and could also alleviate reproducibility problems in areas of chemistry where repeating procedures is costly.
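As a simple illustration of interface detection from a scanned intensity profile — not the paper's published algorithm — the interface can be estimated as the position of steepest change in transmitted-light intensity along the vessel:

```python
import numpy as np

def detect_interface(positions, intensity):
    """Estimate the liquid-liquid interface as the sensor position where
    the transmitted-light intensity profile changes most steeply.

    positions : (n,) sensor positions along the vessel (e.g. mm)
    intensity : (n,) light intensity recorded at each position
    """
    grad = np.abs(np.gradient(intensity, positions))  # |dI/dz| along the vessel
    return positions[int(np.argmax(grad))]
```

Because the two phases transmit light differently, the intensity profile shows a step at the interface; tracking the step's position over repeated scans is one way to monitor settling, in the spirit of the device described above.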
Life sciences and society – an alphabetical journey
Kerstin Thurow
SLAS Technology, vol. 34, Article 100341. Pub Date: 2025-10-01 (Epub 2025-08-10). DOI: 10.1016/j.slast.2025.100341
BrainCNN: Automated brain tumor grading from magnetic resonance images using a convolutional neural network-based customized model
Jing Yang, Muhammad Abubakar Siddique, Hafeez Ullah, Ghulam Gilanie, Lip Yee Por, Samah Alshathri, Walid El-Shafai, Haya Aldossary, Thippa Reddy Gadekallu
SLAS Technology, vol. 34, Article 100334. Pub Date: 2025-10-01 (Epub 2025-07-23). DOI: 10.1016/j.slast.2025.100334

Brain tumors pose a significant risk to human life, making accurate grading essential for effective treatment planning and improved survival rates. Magnetic Resonance Imaging (MRI) plays a crucial role in this process. The objective of this study was to develop an automated brain tumor grading system utilizing deep learning techniques. A dataset comprising 293 MRI scans from patients was obtained from the Department of Radiology at Bahawal Victoria Hospital in Bahawalpur, Pakistan. The proposed approach integrates a specialized Convolutional Neural Network (CNN) with pre-trained models to classify brain tumors into low-grade (LGT) and high-grade (HGT) categories with high accuracy. To assess the model's robustness, experiments were conducted using various methods: (1) raw MRI slices, (2) MRI segments containing only the tumor area, (3) feature-extracted slices derived from the original images through the proposed CNN architecture, and (4) feature-extracted slices from tumor area-only segmented images using the proposed CNN. The MRI slices and the features extracted from them were classified using machine learning models, including a Support Vector Machine (SVM) and transfer-learning CNN architectures such as MobileNet, Inception V3, and ResNet-50. Additionally, a custom model was specifically developed for this research. The proposed model achieved an impressive peak accuracy of 99.45 %, with classification accuracies of 99.56 % for low-grade tumors and 99.49 % for high-grade tumors, surpassing traditional methods. These results not only enhance the accuracy of brain tumor grading but also improve computational efficiency by reducing processing time and the number of iterations required.