
Journal of Intelligent & Fuzzy Systems: Latest Publications

Implementation of a quantum machine learning model for the categorization and analysis of COVID-19 cases
CAS Tier 4 (Computer Science), Q1 (Mathematics). Pub Date: 2023-10-28. DOI: 10.3233/jifs-233633
Heba Kadry, Ahmed H. Samak, Sara Ghorashi, Sarah M. Alhammad, Abdulwahab Abukwaik, Ahmed I. Taloba, Elnomery A. Zanaty
Coronavirus is a new pathogen that infects both the upper and lower respiratory tracts. The global COVID-19 pandemic's size, rate of transmission, and number of deaths are all steadily rising. COVID-19 cases can be detected and analyzed using Computed Tomography (CT) scanning. For identifying lung infection, chest CT imaging offers rapid detection, relatively low cost, and high sensitivity. Because only minimal information is available and the image features are complicated, COVID-19 identification is a difficult process. To address this problem, a modified Deformed Entropy (QDE) algorithm for CT image scanning is suggested. To increase the number of training samples for effective training and testing, the suggested method uses QDE to generate CT images. The extracted features are then used for classification. Rapid innovations in quantum mechanics have prompted researchers to use Quantum Machine Learning (QML) to test strategies for improvement. Furthermore, the categorization of COVID-19-diagnosed and non-diagnosed images is accomplished with a Quanvolutional Neural Network (QNN). To evaluate the suggested technique, its results are compared with other methods; for processing the COVID-19 imagery, the study compares the QNN with other existing approaches. Compared with other models, the suggested technique produced improved outcomes. With the generated COVID-19 CT images, the suggested technique also outperforms previous state-of-the-art image synthesis techniques, indicating possibilities for other machine learning tasks such as cognitive segmentation and classification. As a result of the improved model training/testing, the image classification results are more accurate.
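The quanvolutional layer at the heart of this kind of classifier treats a small quantum circuit as a convolutional filter over image patches. The sketch below is a minimal illustration of that idea only; it assumes PennyLane, a 2x2 patch, a 4-qubit simulator, and a single random entangling layer, none of which are the authors' actual configuration.

```python
# Minimal quanvolutional-filter sketch (not the paper's QDE/QNN implementation).
import numpy as np
import pennylane as qml

n_qubits = 4                                   # one qubit per pixel of a 2x2 patch
dev = qml.device("default.qubit", wires=n_qubits)
rand_weights = np.random.uniform(0, 2 * np.pi, size=(1, n_qubits))

@qml.qnode(dev)
def quanv_circuit(patch):
    # Encode the four pixel intensities (scaled to [0, 1]) as rotation angles.
    for i, pixel in enumerate(patch):
        qml.RY(np.pi * pixel, wires=i)
    # A fixed random entangling layer plays the role of the "quantum filter".
    qml.RandomLayers(rand_weights, wires=list(range(n_qubits)))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

def quanvolve(image):
    """Slide the 2x2 quantum filter over a (H, W) image with stride 2,
    producing n_qubits output channels, analogous to a conv layer."""
    h, w = image.shape
    out = np.zeros((h // 2, w // 2, n_qubits))
    for r in range(0, h - 1, 2):
        for c in range(0, w - 1, 2):
            patch = [image[r, c], image[r, c + 1], image[r + 1, c], image[r + 1, c + 1]]
            out[r // 2, c // 2] = np.array(quanv_circuit(patch), dtype=float)
    return out
```

The resulting multi-channel feature maps would then feed a small classical classifier head, mirroring the hybrid quantum-classical split the abstract describes.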
Citations: 0
Classifying drivers of deforestation by using the deep learning based poly-highway forest convolution network
CAS Tier 4 (Computer Science), Q1 (Mathematics). Pub Date: 2023-10-28. DOI: 10.3233/jifs-233534
D. Abdus Subhahan, C.N.S. Vinoth Kumar
The worldwide deforestation rate worsens year after year, ultimately resulting in a variety of severe consequences for both mankind and the environment. To track the success of forest preservation activities, it is crucial to establish a reliable forest monitoring system. Changes in forest status are extremely difficult to annotate manually because of the tiny size and subtlety of the borders involved, particularly in regions abutting residential areas. Previous forest monitoring systems failed because they relied on low-resolution satellite images and drone-based data, both of which have inherent limitations. Most government organizations still use manual annotation, which is a slow, laborious, and costly way to keep tabs on the data. The purpose of this research is to solve these problems by building a poly-highway forest convolution network using deep learning to automatically detect forest borders so that changes over time can be monitored. Initially, the data are curated using a dynamic decomposed Kalman filter and then augmented. Afterwards, the augmented image features are fused using multimodal discriminant centroid feature clustering, and the selected area is segmented using the iterative initial seeded algorithm (IISA). Finally, the level and the driver of deforestation are classified using the poly-highway forest convolution network (PHFCN). The whole experiment was carried out on a dataset of 6048 Landsat-8 satellite sub-images in a MATLAB environment. The results show that the suggested methodology performs better than other existing mechanisms.
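The IISA segmentation step is named but not specified; classic seeded region growing captures the same idea of expanding a labelled region outward from initial seed pixels. The sketch below is a generic NumPy seeded-segmentation loop under that assumption, not the authors' IISA, and the band name in the usage comment is hypothetical.

```python
# Generic seeded region growing, illustrating the iterative seeded-segmentation idea.
import numpy as np
from collections import deque

def seeded_region_grow(image, seed, tol=0.1):
    """Grow a region from `seed` (row, col), adding 4-connected neighbours whose
    intensity differs from the running region mean by at most `tol`."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    region_sum, region_n = float(image[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(image[nr, nc] - region_sum / region_n) <= tol:
                    mask[nr, nc] = True
                    region_sum += float(image[nr, nc])
                    region_n += 1
                    queue.append((nr, nc))
    return mask

# Example: grow a forest-cover region from a manually chosen seed pixel.
# mask = seeded_region_grow(ndvi_band, seed=(120, 340), tol=0.05)
```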
Citations: 0
Predicting diabetic macular edema in retina fundus images based on optimized deep residual network techniques on medical internet of things
CAS Tier 4 (Computer Science), Q1 (Mathematics). Pub Date: 2023-10-28. DOI: 10.3233/jifs-234649
Vo Thi Hong Tuyet, Nguyen Thanh Binh, Dang Thanh Tin
With the medical internet of things, many automated diagnostic models for eye diseases become easier to build, and doctors can quickly contrast and compare retina fundus images. A retina image contains a great deal of information, and detecting diabetic macular edema from retinal images in a healthcare system is difficult because the relevant details are very small. This paper proposes a new model based on the medical internet of things for predicting diabetic macular edema in retina fundus images. The method, called DMER (Diabetic Macular Edema in Retina fundus images), detects diabetic macular edema based on an improved deep residual network combined with a feature pyramid network in the context of the medical internet of things. The DMER method includes the following stages: (i) an improved ResNet101 combined with a feature pyramid network extracts image features and produces feature maps; (ii) a region proposal network searches for potential anomalies; and (iii) the predicted bounding boxes are regressed against the true bounding boxes to confirm the presence of macular edema. The MESSIDOR and DIARETDB1 datasets are used for testing, with evaluation criteria such as sensitivity, specificity, and accuracy. The accuracy of the DMER method is about 98.08% on the MESSIDOR dataset and 98.92% on the DIARETDB1 dataset. On these datasets, DMER outperforms the other methods reported to date.
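The three stages (ResNet+FPN backbone, region proposal network, box regression) correspond closely to a standard two-stage detector. The sketch below fine-tunes torchvision's Faster R-CNN as a stand-in, assuming a ResNet-50 FPN backbone (the paper describes an improved ResNet101+FPN, which torchvision does not ship) and two classes, background and edema.

```python
# Detection-style stand-in for the DMER pipeline: ResNet+FPN backbone, RPN, box head.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_dme_detector(num_classes=2):
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # Replace the box head so it predicts background vs. macular edema.
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_dme_detector()
model.eval()
with torch.no_grad():
    fake_fundus = [torch.rand(3, 512, 512)]   # one dummy RGB fundus image
    detections = model(fake_fundus)           # list of dicts: boxes, labels, scores
```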
Citations: 0
Hidden Markov model-based modeling and prediction for implied volatility surface
CAS Tier 4 (Computer Science), Q1 (Mathematics). Pub Date: 2023-10-28. DOI: 10.3233/jifs-232139
Hongyue Guo, Qiqi Deng, Wenjuan Jia, Lidong Wang, Cong Sui
Implied volatility plays a pivotal role in the options market, and the collection of implied volatilities across strikes and maturities is known as the implied volatility surface (IVS). To capture the dynamics of the IVS, this study examines its latent states and their relationships based on the regime-switching framework of the hidden Markov model (HMM). Cross-sectional models are first built for daily implied volatilities, and the obtained regression factors are regarded as proxies for the IVS. With these latent factors, the HMM is then employed to model the dynamics of the IVS. Taking advantage of the HMM, the hidden state of each day's data is identified to obtain the corresponding time distribution, state characteristics, and transitions between hidden states. The empirical study is conducted on Shanghai 50ETF options, and the analysis indicates that the HMM can capture the latent factors of the IVS. The identified states reflect different financial characteristics, and some of their typical features and transitions are associated with certain events. In addition, using the HMM to predict the regression factors of the cross-sectional models enables further forecasting of implied volatilities. The autoregressive integrated moving average model, the vector auto-regression model, and the support vector regression model are used as benchmarks for comparison. The results show that the HMM performs better in implied volatility prediction than these models.
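A minimal version of the regime-switching step can be written with an off-the-shelf Gaussian HMM fitted to the daily factor series. The sketch below assumes hmmlearn, three hidden states, and randomly generated placeholder factors; the paper's cross-sectional factor construction and actual state count are not reproduced here.

```python
# Fit a Gaussian HMM to daily IVS factors and inspect regimes and transitions.
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Placeholder for a (T, k) array of daily cross-sectional regression factors
# (e.g. level, moneyness slope, term-structure slope).
rng = np.random.default_rng(0)
factors = rng.normal(size=(500, 3))

hmm = GaussianHMM(n_components=3, covariance_type="full", n_iter=200, random_state=0)
hmm.fit(factors)

states = hmm.predict(factors)           # most likely hidden regime for each day
print("Transition matrix:\n", hmm.transmat_)
print("State means:\n", hmm.means_)
```

Forecasting the factors one step ahead from the current state's transition probabilities, and plugging them back into the cross-sectional model, would give the implied-volatility forecasts the abstract describes.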
Citations: 0
Modelling a machine learning based multivariate content grading system for YouTube Tamil-post analysis
CAS Tier 4 (Computer Science), Q1 (Mathematics). Pub Date: 2023-10-28. DOI: 10.3233/jifs-222504
G. Srivatsun, S. Thivaharan
Writing is a crucial component of language requirements and an effective way to reflect language proficiency. As standardized Tamil language exams grow in popularity, evaluating them manually becomes time-consuming and costly for administrators. Numerous studies on computerized English assessment systems have been conducted in recent years, but because of Tamil's complicated grammatical structures, less research has been done on computerized evaluation methods for Tamil. In this research, we present a Tamil review-comment analysis system using a novel multivariate naïve Bayes classifier (mv-NB), in which the comments are acquired from an online social network and the classifier is trained on this database for further analysis. Experiments show that our content grading system achieves a graded Kappa of 0.4239, an error rate of 2.55, and a precision of 85% on the online dataset, which is superior to other widely used machine learning algorithms trained on big datasets. Our findings are promising. Additionally, our content analysis can provide useful feedback on Tamil writing in YouTube posts, including comments, spelling errors, and morphological issues that help to analyze the language correlation.
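A baseline version of such a grading pipeline can be built from a bag-of-words vectorizer and a standard naive Bayes classifier. The sketch below assumes scikit-learn and uses MultinomialNB over character n-grams as a stand-in for the paper's multivariate naive Bayes (mv-NB) variant, whose exact formulation is not given; the example comments and labels are hypothetical.

```python
# Baseline naive Bayes comment grader (stand-in for the mv-NB classifier).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical labelled comments: text -> grade ("good" / "poor").
comments = ["nalla padam", "worst video", "super content", "waste of time"]
grades = ["good", "poor", "good", "poor"]

grader = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # char n-grams suit Tamil morphology
    MultinomialNB(),
)
grader.fit(comments, grades)
print(grader.predict(["semma video"]))   # classify a new comment
```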
Citations: 0
Temporal-geographical attention-based transformer for point-of-interest recommendation
CAS Tier 4 (Computer Science), Q1 (Mathematics). Pub Date: 2023-10-28. DOI: 10.3233/jifs-234824
Shaojie Jiang, Jiang Wu
Point-of-Interest (POI) recommendation is one of the most important tasks in the field of social network analysis, and many approaches have been proposed in recent years to enhance model performance on this task. Existing studies have revealed that the temporal factor and the geographical factor are two crucial contextual factors that influence user decisions. However, these methods learn representations of POIs and users from each contextual factor separately and fuse the learned representations only in the final stage, which ignores the interactions between contextual factors and leads to suboptimal representations. To overcome this gap, we propose a novel Temporal-Geographical Attention-based Transformer (TGAT) for the POI recommendation task. Specifically, TGAT develops a hybrid sequence sampling strategy that samples sequences of POIs from the different contextual-factor POI graphs generated by users' check-in records. In this way, the interactions of different contextual factors can be carefully preserved. TGAT then uses a Transformer-based neural network backbone to learn POI representations from the sampled sequences. In addition, a weighted aggregation strategy is proposed to fuse the representations learned from the different contextual factors. Extensive experimental results on real-world datasets demonstrate the effectiveness of TGAT.
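The backbone described here, a Transformer encoder over sampled POI sequences followed by weighted fusion of the contextual views, can be sketched compactly. The code below assumes PyTorch and abstracts the graph sampling into precomputed index sequences; layer sizes, the mean pooling, and the two-view softmax fusion are illustrative choices, not the authors' TGAT.

```python
# Transformer encoding of temporal-view and geographical-view POI sequences,
# fused by learnable weights (illustrative sketch only).
import torch
import torch.nn as nn

class POIEncoder(nn.Module):
    def __init__(self, num_pois, dim=64, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(num_pois, dim)
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        # Learnable weights for fusing the temporal- and geographical-view embeddings.
        self.view_weights = nn.Parameter(torch.ones(2))

    def forward(self, temporal_seq, geo_seq):
        # Each input: (batch, seq_len) of POI indices sampled from one factor graph.
        views = []
        for seq in (temporal_seq, geo_seq):
            h = self.encoder(self.embed(seq))      # (batch, seq_len, dim)
            views.append(h.mean(dim=1))            # pool to one vector per sequence
        w = torch.softmax(self.view_weights, dim=0)
        return w[0] * views[0] + w[1] * views[1]   # fused representation

model = POIEncoder(num_pois=1000)
fused = model(torch.randint(0, 1000, (8, 20)), torch.randint(0, 1000, (8, 20)))
```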
Citations: 0
Determination of multi-UAVs formation shape: Using a requirement satisfaction and spherical fuzzy ANP based TOPSIS approach
CAS Tier 4 (Computer Science), Q1 (Mathematics). Pub Date: 2023-10-28. DOI: 10.3233/jifs-231494
An Zhang, Minghao Li, Wenhao Bi
The formation shape of multiple unmanned aerial vehicles (multi-UAVs) refers to the geometric shape the UAVs adopt when flying in formation and describes their relative positions. It plays a necessary role in multi-UAV collaboration to improve performance, avoid collisions, and provide a reference for control. This study aims to determine the most appropriate multi-UAV formation shape for a specific mission so as to meet different and even conflicting requirements. The proposed approach introduces requirement satisfaction and a spherical fuzzy analytic network process (SFANP) to improve the technique for order preference by similarity to ideal solution (TOPSIS). First, multi-UAV capability criteria and their evaluation models are constructed. Next, performance data are transformed into requirement satisfaction of capability and unified onto a common scale. Qualitative judgments are made and quantified based on spherical fuzzy sets, and nonlinear transformation functions are developed for benefit, cost, and interval metrics. Then, SFANP is used to handle interrelationships among criteria and determine their global weights; it takes decision vagueness and hesitancy into account and extends the decision-makers' preference domain onto a spherical surface. Finally, alternative formation shapes are ranked by their distances to the positive and negative ideal solutions according to TOPSIS. Furthermore, a case study of nine UAVs performing a search-attack mission is set up to illustrate the proposed approach, and a comparative analysis is conducted to verify its applicability and credibility.
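The final ranking step is classical TOPSIS: normalize, weight, measure distances to the positive and negative ideal solutions, and rank by relative closeness. The sketch below implements that step only, with assumed criterion weights and an assumed benefit/cost split; the requirement-satisfaction transforms and spherical fuzzy ANP weighting are not reproduced.

```python
# Classical TOPSIS ranking of candidate formation shapes (weights are assumed).
import numpy as np

def topsis(decision_matrix, weights, benefit_mask):
    """Rank alternatives (rows) over criteria (columns).
    benefit_mask[j] is True when larger values of criterion j are better."""
    m = decision_matrix / np.linalg.norm(decision_matrix, axis=0)   # vector-normalise columns
    v = m * weights                                                 # weighted normalised matrix
    ideal = np.where(benefit_mask, v.max(axis=0), v.min(axis=0))    # positive ideal solution
    anti = np.where(benefit_mask, v.min(axis=0), v.max(axis=0))     # negative ideal solution
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)          # higher = closer to the ideal shape
    return np.argsort(-closeness), closeness

# Four candidate formation shapes scored on three capability criteria.
scores = np.array([[0.8, 0.6, 0.3],
                   [0.5, 0.9, 0.4],
                   [0.7, 0.7, 0.2],
                   [0.6, 0.5, 0.5]])
ranking, cc = topsis(scores, weights=np.array([0.5, 0.3, 0.2]),
                     benefit_mask=np.array([True, True, False]))
```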
Citations: 0
Medical image segmentation using hybrid Contrast Limited Adaptive Histogram Equalization (CLAHE) and expectation maximization algorithm
CAS Tier 4 (Computer Science), Q1 (Mathematics). Pub Date: 2023-10-28. DOI: 10.3233/jifs-233931
K.C. Prabu Shankar, S. Prayla Shyry
Early detection of diseases in men and women can improve treatment and reduce the risk to human life. Nowadays, non-invasive techniques are widely used to detect various types of diseases, and histopathological analysis plays a major role in finding the nature of a disease through medical images. Manual interpretation of these images takes time, is tedious and subjective, and can suffer from human error; it has also been discovered that the interpretation of these images varies among diagnostic labs. As computing power and memory capacity have increased, methodologies and medical image processing techniques have been developed to interpret and analyse these images as a substitute for human involvement. The challenge lies in devising an efficient pre-processing technique that helps in analysing, processing, and preparing the medical image for further diagnostics. This research provides a hybrid technique that reduces noise in the NIfTI medical image by using a 2D adaptive median filter at level 1. The edges of the filtered medical image are preserved using a modified CLAHE algorithm, which maintains the local contrast of the image. An Expectation Maximization (EM) algorithm then extracts the ROI of the image, which helps in easy and accurate identification of the disease. All three steps are run over the 3D image slices of a NIfTI image. The proposed method achieves close-to-ideal RMSE, PSNR, and UQI values, with an average EM runtime of 37.193 seconds per slice.
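On a single 2D slice, the enhancement and ROI-extraction steps can be approximated by CLAHE followed by a Gaussian-mixture EM labelling of pixel intensities. The sketch below assumes OpenCV and scikit-learn; the adaptive median pre-filter, the authors' CLAHE modification, and NIfTI slice handling are omitted, and the number of mixture components is an assumption.

```python
# CLAHE enhancement followed by EM (Gaussian mixture) intensity labelling of one slice.
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

def clahe_em_segment(slice_2d, n_tissues=3):
    img = cv2.normalize(slice_2d, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(img)                       # local-contrast enhancement
    # EM on pixel intensities: each mixture component stands for one tissue/ROI class.
    gmm = GaussianMixture(n_components=n_tissues, random_state=0)
    labels = gmm.fit_predict(enhanced.reshape(-1, 1))
    return labels.reshape(enhanced.shape), enhanced
```

Running this per slice of the 3D volume mirrors the slice-wise processing described in the abstract.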
Citations: 0
Functional dependency-based group decision-making with incomplete information under social media influence: An application to automobile
CAS Tier 4 (Computer Science), Q1 (Mathematics). Pub Date: 2023-10-28. DOI: 10.3233/jifs-232608
Garima Bisht, A.K. Pal
In today's complex decision-making environment, accounting for attribute interdependencies and expert relationships is crucial. Traditional models often assume attribute independence and overlook the significant impact of expert relationships on decision outcomes. Moreover, amid a dynamic and ever-changing decision-making landscape, news and real-time updates have a significant effect on alternative rankings: information is constantly evolving, and staying up to date with the latest developments is paramount. To overcome these limitations, this study develops a novel model that effectively captures attribute dependencies and incorporates the influence of social media on alternative ordering. To establish the model, the Decision-Making Trial and Evaluation Laboratory (DEMATEL) method and regression analysis are integrated to capture attribute dependencies. Furthermore, social network analysis (SNA) is employed to develop a trust-propagation model for determining experts' weights. Additionally, we present a two-stage, multi-skilled, high-potential multi-criteria decision-making (MCDM) framework, where the base-criterion method (BCM) is adopted to evaluate attribute weights and the well-known Višekriterijumsko KOmpromisno Rangiranje (VIKOR) method is redefined using the Heronian mean (HM) operator to capture the relationships between arguments. Despite uncertainties, the proposed fuzzy-BCM-VIKOR-Heronian (F-BCM-VIKOR-H) approach enhances flexibility by addressing inconsistent data in complex decision-making problems. Since news or future updates about any alternative or attribute can significantly affect the ranking, the proposed approach actively considers the effect of such news through the formation of an updated matrix. By factoring in the latest developments, the decision-making model remains relevant and adaptable, capturing the most current insights into alternative performance. To demonstrate the model's effectiveness, we apply the proposed approach to a numerical illustration in the electronics industry, specifically for ranking cars. Sensitivity analysis evaluates the model's stability, and comparing the results with existing approaches showcases its advantage and superiority.
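The ranking core of the framework is VIKOR, which scores each alternative by group utility S, individual regret R, and a compromise index Q. The sketch below is classical VIKOR with assumed weights and benefit-type criteria only; the Heronian-mean aggregation, BCM weighting, and trust-propagated expert weights of the paper are not reproduced, and the car scores are hypothetical.

```python
# Classical VIKOR scoring (lower Q = better alternative); weights are assumed.
import numpy as np

def vikor(decision_matrix, weights, v=0.5):
    """Return VIKOR S (group utility), R (individual regret), and Q scores,
    treating every criterion as a benefit criterion."""
    f_best = decision_matrix.max(axis=0)
    f_worst = decision_matrix.min(axis=0)
    norm = (f_best - decision_matrix) / (f_best - f_worst)   # normalised regret per criterion
    S = (weights * norm).sum(axis=1)
    R = (weights * norm).max(axis=1)
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return S, R, Q

cars = np.array([[7.5, 6.0, 8.2],      # each row: one car scored on three criteria
                 [8.1, 5.5, 7.9],
                 [6.9, 7.2, 8.0]])
S, R, Q = vikor(cars, weights=np.array([0.4, 0.35, 0.25]))
print(Q.argsort())                      # ranking, best first
```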
Citations: 0
Hybrid ResGRU: Effective brain tumour classification approach using of abnormal images
CAS Tier 4 (Computer Science), Q1 (Mathematics). Pub Date: 2023-10-28. DOI: 10.3233/jifs-233546
Aishwarya Rajendran, Sumathi Ganesan, T.K.S. Rathis Babu
Brain tumors grow in irregular shapes and can lie deep inside the tissue, leading to cancer. Human brain tumor identification and categorization are performed with high latency, yet they are an essential task for medical experts. Automated diagnosis is generally used to advance diagnostic ability and achieve superior accuracy in brain tumor detection. Although research continues to enhance detection performance, segmenting the brain tumor remains highly challenging because of variability in tumor type, contrast, image modality, and other factors. To meet these challenges, a novel classification method is introduced using segmentation and machine learning approaches. Initially, the required images are collected from benchmark data sources. The input images undergo a pre-processing stage using Contrast Limited Adaptive Histogram Equalization (CLAHE) and filtering methods. The pre-processed images are then given as input to two classifier models, a Residual Network (ResNet) and a Gated Recurrent Unit (GRU), which label images as normal or abnormal. In the second part, each abnormal image serves as input for the segmentation step. In segmentation, relevant texture and spatial features are extracted. The resulting features are optimized, with the optimal features selected by the Adaptive Coyote Optimization Algorithm (ACOA). The selected features are then fed into machine learning models such as Support Vector Machine (SVM), Artificial Neural Network (ANN), and Random Forest (RF) to render the segmented image. Finally, a hybrid classifier named Hybrid ResGRU is developed by integrating the ResNet and GRU, with hyperparameters tuned optimally using the developed ACOA; it classifies each abnormal image as benign or malignant. The experimental results are evaluated and the performance is analyzed with various metrics, showing that the proposed model achieves effective segmentation and classification performance.
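One plausible reading of the ResNet+GRU hybrid is to flatten the CNN feature map into a spatial sequence and let a GRU summarize it before a two-way benign/malignant head. The sketch below assumes PyTorch and torchvision with a ResNet-18 trunk as a stand-in; layer sizes are illustrative and the ACOA hyperparameter-tuning and segmentation stages are not shown.

```python
# Illustrative ResNet+GRU hybrid classifier (benign vs. malignant logits).
import torch
import torch.nn as nn
import torchvision

class HybridResGRU(nn.Module):
    def __init__(self, num_classes=2, hidden=128):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])   # keep the conv trunk
        self.gru = nn.GRU(input_size=512, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                        # x: (batch, 3, H, W) MRI slices
        fmap = self.cnn(x)                       # (batch, 512, h, w)
        seq = fmap.flatten(2).permute(0, 2, 1)   # (batch, h*w, 512): spatial sequence
        _, last = self.gru(seq)                  # last hidden state summarises the scan
        return self.head(last.squeeze(0))        # logits: benign vs. malignant

logits = HybridResGRU()(torch.rand(4, 3, 224, 224))
```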
Citations: 0