Latest articles in SLAS Technology

Life sciences and accountability.
IF 3.7 | Medicine (Tier 4) | Q3 Biochemical Research Methods | Pub Date: 2025-11-25 | DOI: 10.1016/j.slast.2025.100369
Kerstin Thurow
Citations: 0
Literature highlights column: From the literature: Life Sciences Discovery and Technology Highlights.
IF 3.7 | Medicine (Tier 4) | Q3 Biochemical Research Methods | Pub Date: 2025-11-12 | DOI: 10.1016/j.slast.2025.100364
Jamien Lim, Tal Murthy
Citations: 0
Efficient microaneurysm segmentation in retinal images via a lightweight Attention U-Net for early DR diagnosis
IF 3.7 | Medicine (Tier 4) | Q3 Biochemical Research Methods | Pub Date: 2025-10-01 (Epub 2025-07-28) | DOI: 10.1016/j.slast.2025.100323
Muhammad Zeeshan Tahir, Xingzheng Lyu, Muhammad Nasir, Wengan He, Abeer Aljohani, Sanyuan Zhang
Diabetic Retinopathy (DR) is a complication of diabetes that can cause vision impairment and lead to permanent blindness if left undiagnosed. The increasing number of diabetic patients, coupled with a shortage of ophthalmologists, highlights the urgent need for automated screening tools for early DR diagnosis. Among the earliest and most detectable signs of DR are microaneurysms (MAs). However, detecting MAs in fundus images remains challenging due to several factors, including image quality limitations, the subtle appearance of MA features, and the wide variability in color, shape, and texture. To address these challenges, we propose a novel preprocessing pipeline that enhances the overall image quality, facilitating feature learning and improving the detection of subtle MA features in low-quality fundus images. Building on this preprocessing technique, we further develop a lightweight Attention U-Net model that significantly reduces the number of model parameters while achieving superior performance. By incorporating an attention mechanism, the model focuses on the subtle features of MAs, leading to more precise segmentation results. We evaluated our method on the IDRID dataset, achieving a sensitivity of 0.81 and specificity of 0.99, outperforming existing MA segmentation models. To validate its generalizability, we tested it on the E-Ophtha dataset, where it achieved a sensitivity of 0.59 and specificity of 0.99. Despite its lightweight design, our model demonstrates robust performance under challenging conditions such as noise and varying lighting, making it a promising tool for clinical applications and large-scale DR screening.
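The abstract does not detail the attention mechanism; a common choice in Attention U-Net variants is an additive attention gate on the skip connections. A minimal pure-Python sketch, where the scalar weights `wx`, `wg`, and `psi` stand in for the model's 1×1 convolutions and all names are illustrative rather than taken from the paper:

```python
import math

def attention_gate(x, g, wx, wg, psi):
    """Additive attention gate in the style of Attention U-Net.

    x  : skip-connection features (one float per pixel)
    g  : gating features from the coarser decoder level (same length)
    wx, wg, psi : scalar weights standing in for 1x1 convolutions
    Returns the attention-weighted skip features alpha_i * x_i.
    """
    out = []
    for xi, gi in zip(x, g):
        q = max(0.0, wx * xi + wg * gi)            # ReLU(Wx*x + Wg*g)
        alpha = 1.0 / (1.0 + math.exp(-psi * q))   # sigmoid(psi * q)
        out.append(alpha * xi)
    return out

# First pixel: skip and gating features agree strongly; second: barely active.
feats = attention_gate(x=[1.0, 0.1], g=[1.0, 0.0], wx=1.0, wg=1.0, psi=2.0)
```

Pixels supported by both the skip and gating signals pass through nearly unchanged, while weakly supported pixels are attenuated — the behavior the abstract credits for focusing on subtle MA features.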
Citations: 0
Self-supervised disc and cup segmentation via non-local deformable convolution and adaptive transformer
IF 3.7 | Medicine (Tier 4) | Q3 Biochemical Research Methods | Pub Date: 2025-10-01 (Epub 2025-08-09) | DOI: 10.1016/j.slast.2025.100338
Wenbo Zhao, Yu Wang
Optic disc and cup segmentation is a crucial subfield of computer vision, playing a pivotal role in automated pathological image analysis. It enables precise, efficient, and automated diagnosis of ocular conditions, significantly aiding clinicians in real-world medical applications. However, due to the scarcity of medical segmentation data and the insufficient integration of global contextual information, segmentation accuracy remains suboptimal. This issue becomes particularly pronounced in optic disc and cup cases with complex anatomical structures and ambiguous boundaries. To address these limitations, this paper introduces a self-supervised training strategy integrated with a newly designed network architecture to improve segmentation accuracy. Specifically, we first propose a non-local dual deformable convolutional block, which aims to capture irregular image patterns (i.e., boundaries). Second, we modify the traditional vision transformer and design an adaptive K-Nearest Neighbors (KNN) transformation block to extract global semantic context from images. Finally, an initialization strategy based on self-supervised training is proposed to reduce the network's reliance on labeled data. Comprehensive experimental evaluations demonstrate the effectiveness of the proposed method, which outperforms previous networks and achieves state-of-the-art performance, with IoU scores of 0.9577 for the optic disc and 0.8399 for the optic cup on the REFUGE dataset.
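The abstract does not give the adaptive KNN transformation block's exact form; one plausible reading is attention restricted to each query's k nearest keys. A toy one-dimensional sketch — the function name and the distance-based scoring are assumptions for illustration, not the paper's implementation:

```python
import math

def knn_attention(queries, keys, values, k):
    """Attention over only the k nearest keys per query (toy 1-D version).

    For every scalar query, select the k keys closest in feature space,
    softmax their negative distances, and average the matching values.
    """
    out = []
    for q in queries:
        # indices of the k nearest keys by absolute distance
        idx = sorted(range(len(keys)), key=lambda i: abs(keys[i] - q))[:k]
        scores = [math.exp(-abs(keys[i] - q)) for i in idx]
        z = sum(scores)
        out.append(sum(s / z * values[i] for s, i in zip(scores, idx)))
    return out

# With k=1 the query copies the value of its single nearest key;
# widening k blends in distant keys only with exponentially small weight.
r1 = knn_attention([0.0], [0.0, 10.0], [1.0, 5.0], k=1)
r2 = knn_attention([0.0], [0.0, 10.0], [1.0, 5.0], k=2)
```

Restricting attention to a neighborhood is one way to keep global-context modules tractable, which is consistent with the abstract's goal of extracting global semantic context efficiently.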
Citations: 0
An integrated deep learning framework using adaptive enhanced vision fusion and modified mobilenet architecture for precision classification of skin diseases with enhanced diagnostic performance
IF 3.7 | Medicine (Tier 4) | Q3 Biochemical Research Methods | Pub Date: 2025-10-01 (Epub 2025-07-16) | DOI: 10.1016/j.slast.2025.100331
Ahsan Bilal Tariq, Muhammad Zaheer Sajid, Nauman Ali khan, Muhammad Fareed Hamid, Anwaar UlHaq, Jarrar Amjad
Due to challenges such as illumination variability, noise, and visual distortions, machine learning (ML) and deep learning (DL) approaches for skin disease evaluation remain complex. Traditional methods often neglect these issues, leading to skewed predictions and poor performance. This research leverages a diverse dataset and robust image processing techniques to enhance diagnostic accuracy under such demanding conditions. We propose Dermo-Transfer, a novel architecture that combines MobileNet with dense blocks and residual connections to improve skin disease severity classification by addressing problems such as vanishing gradients and overfitting. Our method incorporates multi-scale Retinex, gamma correction, and histogram equalization to enhance image quality and visibility. Furthermore, a quantum support vector machine (QSVM) classifier is employed to improve classification performance, providing confidence scores and effectively handling multi-class problems. The proposed approach significantly enhances diagnostic accuracy and outperforms previous models. Dermo-Transfer not only improves pattern recognition and classification accuracy but also robustly handles varying image quality and lighting conditions. Dermo-Transfer was trained on 77,314 images covering skin conditions such as molluscum, warts, eczema, psoriasis, lichen planus, seborrheic keratoses, atopic dermatitis, melanoma, basal cell carcinoma (BCC), melanocytic nevi (NV), benign keratosis, and other benign tumors. The Dermo-Transfer classification method achieved accuracies of 99 %, 98.5 %, 97.5 %, and 89 % across four datasets, demonstrating its effectiveness and potential utility for clinical diagnostics. Additionally, Dermo-Transfer outperformed SkinLesNet and MobileNet V2-LSTM in terms of classification accuracy. Experimental results also highlight how IoT devices and mobile applications can enhance the computational efficiency and practical deployment of the Dermo-Transfer model.
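Two of the enhancement steps named in the abstract, gamma correction and histogram equalization, can be sketched on 8-bit grayscale values as follows. This is a pure-Python illustration of the standard operations, not the paper's pipeline (which additionally uses multi-scale Retinex):

```python
def gamma_correct(pixels, gamma):
    """Gamma correction on 8-bit grayscale values: out = 255*(p/255)**gamma."""
    return [round(255 * (p / 255) ** gamma) for p in pixels]

def equalize(pixels):
    """Plain histogram equalization via the cumulative distribution function."""
    n = len(pixels)
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first nonzero CDF value
    # remap each level so the output histogram is as flat as possible
    return [round((cdf[p] - cdf_min) / max(n - cdf_min, 1) * 255)
            for p in pixels]
```

Gamma < 1 brightens dark regions (e.g. level 64 maps to 128 at gamma 0.5), and equalization stretches a narrow intensity range across the full 0-255 scale, both of which make subtle lesion features easier for a classifier to pick up.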
Citations: 0
Life sciences discovery and technology highlights
IF 3.7 | Medicine (Tier 4) | Q3 Biochemical Research Methods | Pub Date: 2025-10-01 (Epub 2025-08-06) | DOI: 10.1016/j.slast.2025.100340
Tal Murthy, Jamien Lim
Citations: 0
Movable optical sensor for automatic detection and monitoring of liquid-liquid interfaces.
IF 3.7 | Medicine (Tier 4) | Q3 Biochemical Research Methods | Pub Date: 2025-10-01 (Epub 2025-08-05) | DOI: 10.1016/j.slast.2025.100335
Rodrigo Moreno, Jonas Jensen, Shahbaz Tareq Bandesha, Simone Peters, Andres Faina, Kasper Stoy

Liquid-liquid extraction (LLE) is an essential operation in many laboratory experiments. However, most automatic LLE devices concentrate on detecting the liquid-liquid interface at one moment in the process, usually at separation, and pay little attention to the state of the liquids as they settle. In this paper, we present an LLE device in which an optical sensor and light source move along the vessel, rather than the mixture moving relative to the sensor. By analyzing the light-intensity patterns with explainable automatic detection algorithms, the interface can be detected at different positions in the vessel with an error below 2 mm and monitored throughout the settling process. The device is tested using a mixture of clear oil and water and two extraction steps in a battery interface material synthesis process. Results show that the setup is able to detect interfaces at different positions along the vessel, even with changes in diameter. By monitoring the settling process, we found that the largest change in the detected signal occurs around the liquid-liquid interface, which further corroborates the detected position. Recording sensor measurements at different positions over time can be used to detect different properties of the liquids, which improves control over the process and could alleviate reproducibility problems in areas of chemistry where repeating procedures is costly.

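The abstract's explainable detection from light-intensity patterns can be illustrated with a simple gradient rule: take the interface as the midpoint of the largest jump between adjacent intensity readings along the vessel. This is a sketch of the general idea under that assumption, not the authors' algorithm:

```python
def find_interface(positions, intensities):
    """Locate a liquid-liquid interface as the point of steepest intensity
    change along the vessel.

    positions   : sensor heights along the vessel, in mm, sorted
    intensities : light-intensity readings at those heights
    Returns the midpoint of the largest adjacent-sample jump.
    """
    best_i = max(range(len(intensities) - 1),
                 key=lambda i: abs(intensities[i + 1] - intensities[i]))
    return (positions[best_i] + positions[best_i + 1]) / 2

# A clear oil layer (high transmission) over water (lower transmission):
# the intensity drops sharply between the 20 mm and 30 mm readings.
pos = find_interface([0, 10, 20, 30, 40], [0.9, 0.9, 0.9, 0.2, 0.2])
```

Because the rule reduces to "largest local change in the scan", it also matches the abstract's observation that the biggest signal change during settling occurs around the interface position.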
Citations: 0
Life sciences and society – an alphabetical journey
IF 3.7 | Medicine (Tier 4) | Q3 Biochemical Research Methods | Pub Date: 2025-10-01 (Epub 2025-08-10) | DOI: 10.1016/j.slast.2025.100341
Kerstin Thurow
Citations: 0
Life sciences and artificial intelligence
IF 3.7 | Medicine (Tier 4) | Q3 Biochemical Research Methods | Pub Date: 2025-10-01 (Epub 2025-08-11) | DOI: 10.1016/j.slast.2025.100342
Kerstin Thurow
Citations: 0
BrainCNN: Automated brain tumor grading from magnetic resonance images using a convolutional neural network-based customized model
IF 3.7 | Medicine (Tier 4) | Q3 Biochemical Research Methods | Pub Date: 2025-10-01 (Epub 2025-07-23) | DOI: 10.1016/j.slast.2025.100334
Jing Yang, Muhammad Abubakar Siddique, Hafeez Ullah, Ghulam Gilanie, Lip Yee Por, Samah Alshathri, Walid El-Shafai, Haya Aldossary, Thippa Reddy Gadekallu
Brain tumors pose a significant risk to human life, making accurate grading essential for effective treatment planning and improved survival rates. Magnetic Resonance Imaging (MRI) plays a crucial role in this process. The objective of this study was to develop an automated brain tumor grading system utilizing deep learning techniques. A dataset comprising 293 MRI scans from patients was obtained from the Department of Radiology at Bahawal Victoria Hospital in Bahawalpur, Pakistan. The proposed approach integrates a specialized Convolutional Neural Network (CNN) with pre-trained models to classify brain tumors into low-grade (LGT) and high-grade (HGT) categories with high accuracy. To assess the model's robustness, experiments were conducted using various methods: (1) raw MRI slices, (2) MRI segments containing only the tumor area, (3) feature-extracted slices derived from the original images through the proposed CNN architecture, and (4) feature-extracted slices from tumor-area-only segmented images using the proposed CNN. The MRI slices and the features extracted from them were classified using machine learning models, including Support Vector Machine (SVM) and transfer-learning CNN architectures such as MobileNet, Inception V3, and ResNet-50. Additionally, a custom model was specifically developed for this research. The proposed model achieved a peak accuracy of 99.45 %, with classification accuracies of 99.56 % for low-grade tumors and 99.49 % for high-grade tumors, surpassing traditional methods. These results not only enhance the accuracy of brain tumor grading but also improve computational efficiency by reducing processing time and the number of iterations required.
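The reported figures (overall peak accuracy plus a separate accuracy for each tumor grade) follow from standard confusion-matrix bookkeeping: overall accuracy over all slices, and per-class recall within each grade. A minimal sketch with hypothetical labels, not the study's data:

```python
def class_accuracies(y_true, y_pred):
    """Overall accuracy plus per-class recall ('accuracy' for each grade)."""
    overall = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    per_class = {}
    for c in set(y_true):
        members = [(t, p) for t, p in zip(y_true, y_pred) if t == c]
        per_class[c] = sum(t == p for t, p in members) / len(members)
    return overall, per_class

# 'LGT' = low-grade, 'HGT' = high-grade; these four labels are illustrative.
overall, per_class = class_accuracies(
    ["LGT", "LGT", "HGT", "HGT"], ["LGT", "HGT", "HGT", "HGT"])
```

Reporting both views matters for grading: with imbalanced classes a high overall accuracy can mask poor recall on the rarer grade, which is why the abstract quotes the low- and high-grade accuracies separately.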
Citations: 0