
Latest Articles in Complex & Intelligent Systems

MAAN: multi-scale atrous attention network for skin lesion segmentation
IF 5.8 | CAS Region 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-10 | DOI: 10.1007/s40747-025-02186-z
Yang Lian, Ruizhi Han, Shiyuan Han, Defu Qiu, Jin Zhou
Skin cancer research is essential to finding new treatments and improving survival rates in computer-aided medicine. Within this research, accurate segmentation of skin lesion images is an important step for both early diagnosis and personalized treatment strategies. However, while currently popular Transformer-based models achieve competitive segmentation results, they often disregard computational complexity and the high costs of training. In this paper, we propose a lightweight network, the multi-scale atrous attention network (MAAN), for skin lesion segmentation. Firstly, we optimize the residual basic block by constructing a dual-path framework with high- and low-resolution paths, which reduces the number of parameters while maintaining effective feature extraction. Secondly, to better capture the information in skin lesion images and further improve performance, we design an adaptive multi-scale atrous attention (AMAA) module at the final stage of the low-resolution path. Experiments on the ISIC 2017 and ISIC 2018 datasets show that MAAN achieves mIoU of 85.20% and 85.67%, respectively, outperforming the recent MHorNet while requiring only 0.37M parameters and 0.23G FLOPs. Additionally, ablation studies demonstrate that the AMAA module works as a plug-and-play component that improves CNN-based methods.
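The multi-scale atrous idea behind the abstract above can be illustrated with a minimal sketch: parallel 3x3 convolutions at several dilation rates, fused into a per-pixel attention map. This is a hypothetical toy module (the class name, rates, and sigmoid fusion are assumptions), not the authors' AMAA implementation.

```python
import torch
import torch.nn as nn

class MultiScaleAtrousAttention(nn.Module):
    """Toy sketch: parallel atrous (dilated) 3x3 convolutions at several
    rates, concatenated and reduced to a spatial attention map that
    reweights the input features. Illustrative only."""
    def __init__(self, channels: int, rates=(1, 2, 4)):
        super().__init__()
        # padding == dilation keeps the spatial size unchanged for a 3x3 kernel
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=r, dilation=r, bias=False)
            for r in rates
        )
        self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x):
        multi = torch.cat([b(x) for b in self.branches], dim=1)
        attn = torch.sigmoid(self.fuse(multi))  # per-pixel attention weights
        return x * attn                         # reweight input features

x = torch.randn(1, 16, 32, 32)
y = MultiScaleAtrousAttention(16)(x)
print(y.shape)  # torch.Size([1, 16, 32, 32])
```

Because each branch shares the same 3x3 kernel size but a different dilation, the receptive fields differ while the parameter count stays small, which is the usual motivation for atrous multi-scale designs.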
Citations: 0
Adaptive exploration and temporal attention in reinforcement learning for autonomous air combat decision making
IF 5.8 | CAS Region 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-09 | DOI: 10.1007/s40747-025-02189-w
Xiang Wu, Junzhe Jiang, Zhihong Chen, Shaojie Wu, Chenghong Ye, Xueyun Chen
Citations: 0
Motion-temporal calibration network for continuous sign language recognition
IF 5.8 | CAS Region 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-09 | DOI: 10.1007/s40747-025-02156-5
Hongguan Hu, Jianjun Peng, Zhidong Xiao, Li Guo, Yi Hu, Di Wu
Continuous Sign Language Recognition (CSLR) is fundamental to bridging the communication gap between hearing-impaired individuals and the broader society. The primary challenge lies in effectively modeling the complex spatial-temporal dynamic features in sign language videos. Current approaches typically employ independent processing strategies for motion feature extraction and temporal modeling, which impedes the unified modeling of action continuity and semantic integrity in sign language sequences. To address these limitations, we propose the Motion-Temporal Calibration Network (MTCNet), a novel framework for continuous sign language recognition that integrates dynamic feature enhancement and temporal calibration. The framework consists of two key innovative modules. First, the Cross-Frame Motion Refinement (CFMR) module implements an inter-frame differential attention mechanism combined with residual learning strategies, enabling precise motion feature modeling and effective enhancement of dynamic information between adjacent frames. Second, the Temporal-Channel Adaptive Recalibration (TCAR) module utilizes adaptive convolution kernel design and a dual-branch feature extraction architecture, facilitating joint optimization in both temporal and channel dimensions. In experimental evaluations, our method demonstrates competitive performance on the widely-used PHOENIX-2014 and PHOENIX-2014-T datasets, achieving results comparable to leading unimodal approaches. Moreover, it achieves state-of-the-art performance on the Chinese Sign Language (CSL) dataset. Through comprehensive ablation studies and quantitative analysis, we validate the effectiveness of our proposed method in fine-grained dynamic feature modeling and long-term dependency capture while maintaining computational efficiency.
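The inter-frame differencing on which the abstract's CFMR module builds can be sketched in a few lines: compute absolute frame-to-frame differences and turn their energy into softmax attention weights over time. The function name and the softmax weighting here are illustrative assumptions, not the paper's mechanism.

```python
import numpy as np

def frame_difference_weights(frames: np.ndarray) -> np.ndarray:
    """Toy sketch of inter-frame differential attention: transitions with
    more motion receive larger softmax-normalized weights.
    frames: (T, H, W) grayscale video; returns T-1 weights summing to 1."""
    diffs = np.abs(np.diff(frames, axis=0))              # (T-1, H, W)
    energy = diffs.reshape(len(diffs), -1).mean(axis=1)  # motion energy per step
    e = np.exp(energy - energy.max())                    # stable softmax
    return e / e.sum()

video = np.zeros((4, 8, 8))
video[2] = 1.0  # a sudden change in frame 2
w = frame_difference_weights(video)
print(w.round(3))
```

In the toy video, the two transitions that involve the changed frame get equal, larger weights than the static first transition, which is the intuition behind emphasizing motion-rich steps.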
Citations: 0
Dynamic RBFN with vector attention-guided feature selection for spam detection in social media
IF 5.8 | CAS Region 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-09 | DOI: 10.1007/s40747-025-02148-5
E Elakkiya, Sumalatha Saleti, Arunkumar Balakrishnan
Citations: 0
FedMGKD: a multi-granularity trusted knowledge distillation framework for edge personalized federated learning
IF 5.8 | CAS Region 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-09 | DOI: 10.1007/s40747-025-02142-x
Ping Zhang, Wenlong Lu, Xiaoyu Zhou, An Bao
Citations: 0
Fishing vessel behavior pattern recognition using AIS sub-trajectory prototype learning based on Gramian Angular Field
IF 5.8 | CAS Region 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-08 | DOI: 10.1007/s40747-025-02187-y
Songtao Hu, Guanyu Chen, Rui Zhou, Xinghan Qin, Xiaokang Wang
Citations: 0
Organfit: a multi-scale convolutional model with ellipse fitting for organoid identification
IF 5.8 | CAS Region 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-06 | DOI: 10.1007/s40747-025-02177-0
Le Tong, Xinran Li, Tao Shu, Xun Deng, Feng Tan, Zemin Kuang, Yu-An Huang, Zhuhong You, Lun Hu, Pengwei Hu, Wei Du
Citations: 0
FieldVMC: an asynchronous model and platform for self-organising morphogenesis of artificial structures
IF 5.8 | CAS Region 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-05 | DOI: 10.1007/s40747-025-02141-y
Angela Cortecchia, Giovanni Ciatto, Roberto Casadei, Danilo Pianini
Citations: 0
Semantic segmentation assisted deep ensemble feature learning model for skin-cancer detection and classification: SDENet
IF 5.8 | CAS Region 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-05 | DOI: 10.1007/s40747-025-02179-y
Ch. Srilakshmi, N. Ramakrishnaiah, E. Laxmi Lydia
The last few years have witnessed a rapid increase in skin-cancer mortality. Despite innovations and growth in vision computing and artificial intelligence, the complex shapes, sizes, and textural patterns of lesions, together with their ambiguous edges, limit the reliability of existing approaches. Although deep learning methods have outperformed traditional approaches, the demand for better skin-lesion segmentation and for ROI-specific feature extraction and learning remains. Moreover, class-imbalance problems must be addressed to avoid skewed learning and prediction. Motivated by this, this paper proposes a novel and robust semantic segmentation assisted deep ensemble feature learning model for skin-cancer detection and classification (SDENet), targeted at multi-class skin-cancer classification. SDENet first performs standard pre-processing followed by synthetic minority over-sampling (SMOTE) to alleviate the class-imbalance problem. It then applies firefly-heuristic-based Fuzzy C-means clustering to segment skin lesions (the ROI), followed by ROI-specific deep spatio-textural ensemble feature extraction and fusion (DeS-TEFF). Specifically, SDENet combines the AlexNet deep network, DenseNet121, and gray-level co-occurrence matrix (GLCM) feature extraction. AlexNet contributes high-dimensional, information-rich features, while DenseNet121 yields a feature set driven by layer-wise learning and feature reuse. After horizontal concatenation of the AlexNet, DenseNet121, and GLCM features, principal component analysis (PCA) feature selection was performed, which helped avoid local minima and convergence issues. The selected features were normalized with z-score normalization to avoid over-fitting. Finally, the normalized features were trained and classified with a heterogeneous ensemble classifier comprising SVM, DT, Random Forest, Extra Tree, and XGBoost classifiers. Maximum-voting ensemble classification on the HAM10000 dataset achieved an average accuracy of 98.97%, precision of 99.38%, recall of 98.94%, and F-measure of 0.99, confirming its superiority over existing approaches for real-time skin-cancer diagnosis.
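Two generic steps of the pipeline described above, z-score normalization and hard maximum voting, can be sketched in plain NumPy. The helper names `zscore` and `majority_vote` are hypothetical illustrations, not the authors' code, and the toy predictions stand in for the five real classifiers.

```python
import numpy as np

def zscore(X: np.ndarray) -> np.ndarray:
    """Column-wise z-score normalization of a feature matrix."""
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)

def majority_vote(predictions: np.ndarray) -> np.ndarray:
    """Hard maximum-voting over per-classifier label predictions.
    predictions: (n_classifiers, n_samples) integer class labels."""
    n_classes = predictions.max() + 1
    # bincount each sample's column of votes, then take the winning class
    votes = np.apply_along_axis(np.bincount, 0, predictions, None, n_classes)
    return votes.argmax(axis=0)

# three hypothetical classifiers voting on three samples
preds = np.array([[0, 1, 2],
                  [0, 1, 1],
                  [1, 1, 2]])
print(majority_vote(preds))  # [0 1 2]
```

In the real model, each row of `preds` would come from one fitted base classifier (SVM, DT, Random Forest, Extra Tree, XGBoost) applied to z-scored PCA features.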
Citations: 0
A real-time mobile solution for shoe try-on using foot pose estimation and 3D processing techniques
IF 5.8 | CAS Region 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-05 | DOI: 10.1007/s40747-025-02188-x
Nguyen Hoang Vu, Tran Van Duc, Pham Quang Tien, Nguyen Thi Ngoc Anh, Nguyen Tien Dat
Citations: 0