
Information Technology and Control: Latest Publications

Revocable Certificateless Public Key Encryption with Equality Test
IF 1.1 | CAS Region 4, Computer Science | Q3, AUTOMATION & CONTROL SYSTEMS | Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.30691
Tung-Tso Tsai, Han-Yu Lin, Han-Ching Tsai
Traditional public key cryptography requires certificates as a link between each user's identity and his or her public key. Typically, public key infrastructures (PKI) are used to manage and maintain certificates. However, building a PKI consumes substantial resources, since it involves many roles and complex policies. The concept of certificateless public key encryption (CL-PKC) was introduced to eliminate the need for certificates. Based on this concept, a mechanism called certificateless public key encryption with equality test (CL-PKEET) was proposed to ensure the confidentiality of private data while supporting equality tests on different ciphertexts. The mechanism suits cloud applications in which users can not only protect personal private data but also enjoy cloud services that test the equality of different ciphertexts. More specifically, any two ciphertexts can be tested to determine whether they encrypt the same plaintext. Indeed, any practical system needs a way to revoke compromised users. However, existing CL-PKEET schemes do not address the revocation problem, and related research is scant. Therefore, this article proposes the first revocable CL-PKEET scheme, called RCL-PKEET, which can effectively remove illegal users from the system while matching the efficiency of existing CL-PKEET schemes in the encryption, decryption, and equality testing processes. Additionally, we formally prove the security of the proposed scheme under the bilinear Diffie-Hellman assumption.
Information Technology and Control, Vol. 51, No. 4, pp. 638-660. Published 2022-12-12.
Citations: 0
Performance Analysis of a 2-bit Dual-Mode Uniform Scalar Quantizer for Laplacian Source
IF 1.1 | CAS Region 4, Computer Science | Q3, AUTOMATION & CONTROL SYSTEMS | Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.30473
Z. Perić, B. Denic, A. Jovanovic, S. Milosavljevic, Milan S. Savic
The main issue with non-adaptive scalar quantizers is their sensitivity to variance mismatch, the effect that occurs when the data variance differs from the variance assumed in the quantizer design. In this paper, we consider the influence of that effect in low-rate (2-bit) uniform scalar quantization (USQ) of a Laplacian source, and we propose an adequate measure to suppress it. In particular, the proposed approach is an upgraded version of previous approaches used to improve the performance of a single quantizer. It is based on dual-mode quantization, which combines two 2-bit USQs (with adequately chosen parameters) to process input data, selected by applying a special rule. Theoretical analysis has shown that the proposed approach is less sensitive to variance mismatch, making the dual-mode USQ more robust than the single USQ. A gain is also achieved compared to other 2-bit quantizer solutions. Experimental results are provided for the quantization of the weights of a multi-layer perceptron (MLP) neural network, where good agreement with the theoretical results is observed. Given these results, we believe the proposed solution is a good choice for compressing non-stationary data modeled by a Laplacian distribution, such as neural network parameters.
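To make the variance-mismatch effect concrete, the following sketch implements a generic 2-bit (four-level) uniform midrise quantizer and compares its SQNR on matched and variance-mismatched Laplacian data. The step size, support, and sample counts are illustrative choices, not the parameters of the paper's dual-mode design.

```python
import math
import random

# Generic 2-bit uniform midrise quantizer: reconstruction levels at
# +/-(delta/2) and +/-(3*delta/2), clipping at the support edge.
def usq_2bit(x: float, delta: float) -> float:
    x = max(min(x, 2.0 * delta), -2.0 * delta)   # clip to [-2d, 2d]
    idx = min(int(abs(x) / delta), 1)            # inner or outer cell
    level = (2 * idx + 1) * delta / 2.0          # delta/2 or 3*delta/2
    return math.copysign(level, x)

def sqnr_db(samples, delta):
    # Signal-to-quantization-noise ratio over a batch of samples.
    sig = sum(s * s for s in samples) / len(samples)
    err = sum((s - usq_2bit(s, delta)) ** 2 for s in samples) / len(samples)
    return 10.0 * math.log10(sig / err)

# Laplacian samples via the inverse CDF; variance of Laplace(b) is 2*b^2,
# so b = 1/sqrt(2) gives unit variance.
random.seed(0)
def laplace(scale):
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

matched = [laplace(1 / math.sqrt(2)) for _ in range(20000)]   # variance 1
mismatched = [2 * s for s in matched]                         # variance 4
delta = 1.0   # designed for the unit-variance source
print(round(sqnr_db(matched, delta), 2), round(sqnr_db(mismatched, delta), 2))
```

The mismatched SQNR drops because variance-4 samples overload the fixed support, which is exactly the degradation the dual-mode design targets.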
Information Technology and Control, Vol. 51, No. 4, pp. 625-637. Published 2022-12-12.
Citations: 0
A New Range-based Breast Cancer Prediction Model Using the Bayes' Theorem and Ensemble Learning
IF 1.1 | CAS Region 4, Computer Science | Q3, AUTOMATION & CONTROL SYSTEMS | Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.31347
Sam Khozama, Ali Mahmoud Mayya
Breast cancer prediction is essential for preventing and treating cancer. In this research, a novel breast cancer prediction model is introduced. In addition, this research aims to provide a range-based cancer score instead of binary classification results (yes or no). The Breast Cancer Surveillance Consortium (BCSC) dataset is used and modified by applying a proposed probabilistic model to achieve the range-based cancer score. The suggested model analyses a subset of the whole BCSC dataset, including 67632 records and 13 risk factors. Three types of statistics are acquired: general cancer and non-cancer probabilities, previous medical knowledge, and the likelihood of each risk factor given each prediction class. The model also uses a weighting methodology to achieve the best fusion of the BCSC's risk factors. The final prediction score is computed from the posterior probability of the weighted combination of risk factors and the three statistics acquired from the probabilistic model. This final prediction is added to the BCSC dataset, and the new version of the BCSC dataset is used to train an ensemble model consisting of 30 learners. Experiments are conducted using both the subset and the whole dataset (including 317880 medical records). The results indicate that the new range-based model is accurate and robust, with an accuracy of 91.33%, a false rejection rate of 1.12%, and an AUC of 0.9795. The new version of the BCSC dataset can be used for further research and analysis.
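The Bayes-style scoring idea can be sketched as follows: per-class priors and per-factor likelihoods combine, with per-factor weights, into a posterior-like score in [0, 1] rather than a hard label. All priors, likelihoods, weights, and factor names below are invented placeholders, not BCSC statistics.

```python
import math

# Hedged sketch of a weighted naive-Bayes "cancer score". Every number
# here is a made-up placeholder standing in for statistics the paper
# extracts from the BCSC dataset.
priors = {"cancer": 0.01, "no_cancer": 0.99}

# P(factor value | class) for two illustrative risk factors.
likelihoods = {
    "age_group": {"50s": {"cancer": 0.30, "no_cancer": 0.20},
                  "60s": {"cancer": 0.40, "no_cancer": 0.15}},
    "density":   {"high": {"cancer": 0.50, "no_cancer": 0.25},
                  "low":  {"cancer": 0.20, "no_cancer": 0.45}},
}

# Per-factor weights (the paper fuses risk factors with learned weights).
weights = {"age_group": 1.0, "density": 1.5}

def cancer_score(record: dict) -> float:
    # Weighted log-likelihood sum per class, then normalize via Bayes.
    log_post = {}
    for cls in priors:
        s = math.log(priors[cls])
        for factor, value in record.items():
            s += weights[factor] * math.log(likelihoods[factor][value][cls])
        log_post[cls] = s
    m = max(log_post.values())                     # for numerical stability
    exp = {c: math.exp(v - m) for c, v in log_post.items()}
    return exp["cancer"] / sum(exp.values())       # score in (0, 1)

print(round(cancer_score({"age_group": "60s", "density": "high"}), 4))
```

A continuous score like this can then be binned into ranges, which is the paper's point of departing from a binary yes/no output.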
Information Technology and Control, Vol. 51, No. 4, pp. 757-770. Published 2022-12-12.
Citations: 3
UPSNet: Universal Point Cloud Sampling Network Without Knowing Downstream Tasks
IF 1.1 | CAS Region 4, Computer Science | Q3, AUTOMATION & CONTROL SYSTEMS | Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.29894
Fujing Tian, Yang Song, Zhidi Jiang, Wenxu Tao, G. Jiang
With the development of three-dimensional sensing technology, the data volume of point clouds grows rapidly. Therefore, point clouds are usually down-sampled in advance to save memory and reduce the computational complexity of downstream tasks such as classification, segmentation, and reconstruction in learning-based point cloud processing. Obviously, the sampled point clouds should be representative and maintain the geometric structure of the original point clouds, so that the downstream tasks can achieve satisfactory performance on the sampled data. Traditional point cloud sampling methods, such as farthest point sampling and random sampling, mainly select a subset of the original point cloud heuristically. However, they do not make full use of the high-level semantic representation of point clouds and are sensitive to outliers. Other sampling methods are task-oriented. In this paper, a Universal Point cloud Sampling Network that does not need to know the downstream tasks (denoted as UPSNet) is proposed. It consists of three modules. The importance learning module learns the mutual information between the points of the input point cloud and calculates a group of variational importance probabilities representing the importance of each point; based on these, a mask is designed to discard the points with lower importance so that the number of remaining points is controlled. Then, the regional learning module learns a high-dimensional space embedding of each region from the input point cloud, and the global feature of each region is obtained by weighting the high-dimensional embedding with the variational importance probability. Finally, through the coordinate regression module, the global feature and the high-dimensional embedding of each region are cascaded for learning to obtain the sampled point cloud. A series of experiments is conducted in which point cloud classification, segmentation, reconstruction and retrieval are performed on reconstructed point clouds sampled with different sampling methods. The experimental results show that the proposed UPSNet provides more reasonable sampling results of the input point cloud for the downstream tasks of classification, segmentation, reconstruction and retrieval, and is superior to existing sampling methods even though it does not know the downstream tasks. Since UPSNet is not oriented to specific downstream tasks, it has wide applicability.
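The importance-masking step can be sketched in isolation: score every point, keep the top-k, discard the rest. UPSNet learns the importance scores end-to-end from mutual information between points; the centroid-distance score used here is only an invented stand-in to show the select-and-mask mechanics.

```python
import random

# Sketch of importance-driven point sampling: score each point, keep the
# k highest-scoring points. The score below (squared distance from the
# cloud centroid) is a placeholder for UPSNet's learned importance.
def sample_by_importance(points, k):
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    scores = [((p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2, i)
              for i, p in enumerate(points)]
    keep = sorted(scores, reverse=True)[:k]            # mask out the rest
    # Restore original point order among the survivors.
    return [points[i] for _, i in sorted(keep, key=lambda t: t[1])]

random.seed(1)
cloud = [(random.random(), random.random(), random.random()) for _ in range(64)]
sampled = sample_by_importance(cloud, 16)
print(len(sampled))  # 16
```

The hard top-k mask here corresponds to the paper's discard step; in UPSNet the scores are differentiable so that the whole pipeline can be trained without task labels.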
Information Technology and Control, Vol. 51, No. 4, pp. 723-737. Published 2022-12-12.
Citations: 0
Alzheimer's Disease Segmentation and Classification on MRI Brain Images Using Enhanced Expectation Maximization Adaptive Histogram (EEM-AH) and Machine Learning
IF 1.1 | CAS Region 4, Computer Science | Q3, AUTOMATION & CONTROL SYSTEMS | Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.28052
J. Ramya, B. Maheswari, M. Rajakumar, R. Sonia
Alzheimer's disease (AD) is an irreversible ailment that causes rapid loss of memory and behavioral changes. The disorder is especially common among the elderly. Although there is no specific treatment, diagnosis helps delay the progression of the disease. Therefore, in the past few years, automatic recognition of AD using image processing techniques has attracted much attention. In this research, we propose a novel framework for the classification of AD using magnetic resonance imaging (MRI) data. Initially, the image is filtered using a 2D Adaptive Bilateral Filter (2D-ABF). The denoised image is then enhanced using the Entropy-based Contrast Limited Adaptive Histogram Equalization (ECLAHE) algorithm. From the enhanced data, the region of interest (ROI) is segmented using clustering and thresholding techniques. Clustering is performed using Enhanced Expectation Maximization (EEM), and thresholding is performed using the Adaptive Histogram (AH) thresholding algorithm. From the ROI, Gray Level Co-Occurrence Matrix (GLCM) features are generated. The GLCM records the co-occurrence of pixel-value pairs at specific spatial offsets in an image. The dimension of these features is reduced using Principal Component Analysis (PCA). Finally, the obtained features are classified; in this work, we employ Logistic Regression (LR). Classification achieved an accuracy of 96.92% in identifying Alzheimer's disease. The proposed framework was then evaluated using performance metrics derived from the confusion matrix, including accuracy, sensitivity, F-score, precision and specificity. Our study demonstrates that the proposed Alzheimer's disease detection model outperforms other models proposed in the literature.
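A minimal GLCM computation for a single pixel offset, with two classic co-occurrence statistics (contrast and energy), can make the feature-extraction step concrete. The tiny 4-level image and the horizontal (0, 1) offset are illustrative choices, not the paper's exact feature set.

```python
# Minimal gray-level co-occurrence matrix (GLCM) for one offset (dy, dx),
# normalized to a joint probability, plus two Haralick-style statistics.
def glcm(image, levels, dy, dx):
    h, w = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[image[y][x]][image[y2][x2]] += 1   # count the pixel pair
    total = sum(sum(row) for row in m)
    return [[c / total for c in row] for row in m]   # normalize to probs

def contrast(p):
    # Weighted by squared gray-level difference: high for abrupt edges.
    n = len(p)
    return sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

def energy(p):
    # Sum of squared probabilities: high for homogeneous textures.
    return sum(c * c for row in p for c in row)

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
p = glcm(img, 4, 0, 1)   # co-occurrence of horizontal neighbors
print(round(contrast(p), 3), round(energy(p), 3))  # 0.583 0.167
```

In the paper these statistics are computed over the segmented ROI and then reduced with PCA before classification.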
Information Technology and Control, Vol. 51, No. 4, pp. 786-800. Published 2022-12-12.
Citations: 8
Lion Based Butterfly Optimization with Improved YOLO-v4 for Heart Disease Prediction Using IoMT
IF 1.1 | CAS Region 4, Computer Science | Q3, AUTOMATION & CONTROL SYSTEMS | Pub Date: 2022-12-12 | DOI: 10.5755/j01.itc.51.4.31323
V. Alamelu, S. Thilagamani
The Internet of Medical Things (IoMT) has been widely used in healthcare services to gather sensor data for the prediction and diagnosis of cardiac disease. Image processing techniques likewise require a clearly focused solution for disease prediction. The primary goal of the proposed method is to use health information and medical images to classify data and forecast cardiac disease. It consists of two phases, one for categorizing the data and one for prediction. If the first phase already detects an actual heart problem, the second phase is not needed. The first phase categorizes data collected from healthcare sensors attached to the patient's body. The second phase evaluates echocardiography images for the prediction of heart disease. A hybrid Lion-based Butterfly Optimization Algorithm (L-BOA) is used for classifying the sensor data. In the existing method, a hybrid Faster R-CNN with SE-ResNet-101 is used for classification; Faster R-CNN uses region proposals to locate objects in the image. The proposed method instead uses an improved YOLO-v4, which improves the semantic understanding of small objects. The improved YOLO-v4 with CSPDarkNet53 is used for feature extraction and for classifying the echocardiogram images. Both categorization approaches were used, and their results were integrated and validated for the ability to forecast heart disease. The LBO-YOLO-v4 process detected regular sensor data with 97.25% accuracy and irregular sensor data with 98.87% accuracy. The proposed improved YOLO-v4 with the CSPDarkNet53 method gives better classification among echocardiogram images.
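The two-phase gating described above can be sketched with stub classifiers standing in for L-BOA and the improved YOLO-v4; all thresholds, field names, and rules here are invented, and the point is only the control flow (phase 2 runs solely when phase 1 is inconclusive).

```python
# Sketch of the two-phase pipeline: phase 1 classifies sensor readings;
# phase 2 (image analysis) runs only when phase 1 is inconclusive.
# Both classifiers are stubs; the thresholds are invented placeholders.

def phase1_sensor_classifier(reading: dict) -> str:
    # Stand-in for the L-BOA sensor-data classifier.
    if reading["heart_rate"] > 120 or reading["spo2"] < 90:
        return "abnormal"
    if reading["heart_rate"] < 100 and reading["spo2"] >= 95:
        return "normal"
    return "inconclusive"

def phase2_image_classifier(echo_image) -> str:
    # Stand-in for the improved YOLO-v4 echocardiogram classifier.
    return "disease" if sum(echo_image) > 10 else "healthy"

def predict(reading, echo_image):
    verdict = phase1_sensor_classifier(reading)
    if verdict != "inconclusive":
        return ("phase1", verdict)        # phase 2 skipped entirely
    return ("phase2", phase2_image_classifier(echo_image))

print(predict({"heart_rate": 130, "spo2": 97}, [1, 2, 3]))
```

Skipping the image stage when the sensor verdict is already decisive is what makes the two-phase design cheaper than always running the detector.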
Information Technology and Control, Vol. 51, No. 4, pp. 692-703. Published 2022-12-12.
Citations: 12
Browser Selection for Android Smartphones Using Novel Fuzzy Hybrid Multi Criteria Decision Making Technique
IF 1.1 | CAS Region 4, Computer Science | Q3, AUTOMATION & CONTROL SYSTEMS | Pub Date: 2022-09-23 | DOI: 10.5755/j01.itc.51.3.30525
Ramathilagam Arunagiri, P. Pandian, Valarmathi Krishnasamy, R. Sivaprakasam
The IT and telecommunication sector has grown massively over the past few decades. Mobile phones, initially developed for making calls, have become essential items and are no longer restricted to calling. They have absorbed the functions of most gadgets, such as computers and cameras, and users regularly encounter a large number of enhanced, better-quality built-in features. A variety of mobile phones with different shapes and sizes are manufactured across a wide range of budgets. This is the key motivation behind the exponential growth in the number of users and the arrival of new manufacturers in the field. Alongside this growth, mobile application software providers are also multiplying quickly. Apart from calling, many consumers use smartphones for browsing the internet, which puts users in a dilemma when selecting the browser that best fulfills their requirements. With this aim, this paper attempts the evaluation and selection of a better browser. To achieve this, a hybrid Multi Criteria Decision Making (MCDM) approach is proposed by combining the COPRAS (Complex Proportional Assessment of alternatives) technique and the Fuzzy Analytical Hierarchy Process (FAHP).
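The COPRAS aggregation step can be sketched as follows, assuming criterion weights have already been obtained (in the paper, via FAHP). The decision matrix, weights, and the beneficial/cost split below are illustrative values, not the paper's browser data.

```python
# Sketch of standard COPRAS: weighted column-normalized matrix, separate
# sums for beneficial (S+) and cost (S-) criteria, relative significance
# Q_i = S+_i + (sum S-) / (S-_i * sum(1/S-)), then utility degree vs. best.
def copras(matrix, weights, beneficial):
    n_alt, n_crit = len(matrix), len(weights)
    col_sums = [sum(row[j] for row in matrix) for j in range(n_crit)]
    d = [[weights[j] * matrix[i][j] / col_sums[j] for j in range(n_crit)]
         for i in range(n_alt)]
    s_plus = [sum(d[i][j] for j in range(n_crit) if beneficial[j])
              for i in range(n_alt)]
    s_minus = [sum(d[i][j] for j in range(n_crit) if not beneficial[j])
               for i in range(n_alt)]
    total_minus = sum(s_minus)
    inv = sum(1.0 / s for s in s_minus)
    q = [s_plus[i] + total_minus / (s_minus[i] * inv) for i in range(n_alt)]
    qmax = max(q)
    return [100.0 * qi / qmax for qi in q]   # utility degree, best = 100

# 3 browsers x 3 criteria: speed (benefit), memory use (cost), rating (benefit)
matrix = [[8.0, 300.0, 4.2],
          [7.0, 220.0, 4.5],
          [9.0, 410.0, 3.9]]
weights = [0.5, 0.3, 0.2]          # e.g. FAHP-derived (illustrative)
beneficial = [True, False, True]
print([round(u, 1) for u in copras(matrix, weights, beneficial)])
```

The alternative with utility degree 100 is the recommended browser; the others are ranked by how close their utility is to it.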
Citation: Information Technology and Control, vol. 51, no. 3, pp. 467-484 (2022-09-23).
Cited: 0
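The COPRAS step of the hybrid approach above ranks alternatives by splitting the weighted, normalized decision matrix into benefit and cost contributions. The following sketch is illustrative (the function name, toy matrix, and weights are ours, not from the paper) and assumes criterion weights have already been obtained, e.g. from FAHP:

```python
import numpy as np

def copras(matrix, weights, benefit_mask):
    """Rank alternatives with COPRAS.

    matrix: (n_alternatives, n_criteria); weights sum to 1;
    benefit_mask[j] is True if criterion j is a benefit (higher is better).
    """
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Sum-normalise each criterion column, then apply the weights
    D = w * X / X.sum(axis=0)
    benefit = np.asarray(benefit_mask, dtype=bool)
    s_plus = D[:, benefit].sum(axis=1)    # benefit-criteria contribution S+
    s_minus = D[:, ~benefit].sum(axis=1)  # cost-criteria contribution S-
    # Relative significance: Q_i = S+_i + sum_j(S-_j) / (S-_i * sum_j(1/S-_j))
    q = s_plus + s_minus.sum() / (s_minus * (1.0 / s_minus).sum())
    # Utility degree as a percentage of the best alternative
    utility = 100.0 * q / q.max()
    return q, utility

q, utility = copras([[3.0, 2.0], [4.0, 1.0]], [0.5, 0.5], [True, False])
```

With this toy matrix (criterion 1 a benefit, criterion 2 a cost), the second alternative scores higher and receives a utility degree of 100%.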
Design and Implementation of a Self-Learner Smart Home System Using Machine Learning Algorithms
IF 1.1 · Region 4 (Computer Science) · Q3 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2022-09-23 · DOI: 10.5755/j01.itc.51.3.31273
C. Güven, M. Aci
Smart home systems integrate technology and services through the network for a better quality of life. Smart homes perform daily housework and activities more easily, either without user intervention or under the user's remote control. In this study, a machine learning-based smart home system has been developed. The aim is to design a system that continuously improves itself and learns, instead of an ordinary smart home system that can merely be remotely controlled. The developed machine learning model predicts the users' routine activities in the home and performs some operations for the user autonomously. The dataset used in the study consists of real data received from the sensors during daily use. Naive Bayes (NB) (i.e. Gaussian NB, Bernoulli NB, Multinomial NB, and Complement NB), ensemble (i.e. Random Forest, Gradient Tree Boosting, and eXtreme Gradient Boosting), linear (i.e. Logistic Regression, Stochastic Gradient Descent, and Passive-Aggressive Classification), and other (i.e. Decision Tree, Support Vector Machine, K Nearest Neighbor, Gaussian Process Classifier (GPC), Multilayer Perceptron) machine learning algorithms were utilized. The performance of the proposed smart home system was evaluated using several metrics; the best results were obtained from the GPC algorithm (Precision: 0.97, Recall: 0.98, F1-score: 0.97, Accuracy: 0.97).
Citation: Information Technology and Control, vol. 51, no. 3, pp. 545-562 (2022-09-23).
Cited: 0
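The evaluation metrics quoted above (Precision, Recall, F1-score, Accuracy) all derive from the confusion-matrix counts. A minimal sketch of how they are computed for a binary task (the function name and toy labels are ours, not from the paper):

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Binary precision, recall, F1 and accuracy from label arrays (positive class = 1)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = np.mean(y_true == y_pred)
    return precision, recall, f1, accuracy

p, r, f1, acc = classification_metrics([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
```

For the toy labels above this yields precision = recall = F1 = 2/3 and accuracy = 0.6; the multi-class scores in the paper would additionally be averaged across activity classes.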
C-DRM: Coalesced P-TOPSIS Entropy Technique addressing Uncertainty in Cloud Service Selection
IF 1.1 · Region 4 (Computer Science) · Q3 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2022-09-23 · DOI: 10.5755/j01.itc.51.3.30881
K. Nivitha, Pabitha Parameshwaran
Cloud computing has diversified its services exponentially and lured a large number of consumers towards the technology. Satisfying user requirements has become a highly challenging problem: most existing systems either explore a large search space or return inappropriate services. Hence, there is a need for reliable and space-efficient service selection/ranking in the cloud environment. The proposed work introduces a novel pruning method and a Dual Ranking Method (DRM) to rank n services, conserving space while providing reliable service that satisfies user requirements. DRM focuses on the uncertainty of user preferences along with their priorities, converting them to weights using the Jensen-Shannon (JS) entropy function. Services are ranked through the Priority-Technique for Order of Preference by Similarity to Ideal Solution (P-TOPSIS), and space complexity is reduced by the novel utility pruning method.
The performance of the proposed Clustering – Dual Ranking Method (C-DRM) is estimated in terms of accuracy and Closeness Index (CI), and the space complexity has been validated through a case study in which the results outperform the existing approaches. Citation: Information Technology and Control, vol. 51, no. 3, pp. 592-605 (2022-09-23).
Cited: 6
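Two building blocks mentioned in the abstract can be sketched briefly: the Jensen-Shannon divergence between two preference distributions, and the classical entropy-weight method that turns a decision matrix into criterion weights. This is a generic sketch under our own assumptions (function names and toy data are ours); the paper's exact weighting scheme may differ:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (base-2 log, so the result lies in [0, 1])."""
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # 0 * log(0) is taken as 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def entropy_weights(matrix):
    """Entropy-weight method: criteria whose values vary more get larger weights."""
    X = np.asarray(matrix, dtype=float)
    P = X / X.sum(axis=0)              # column-wise proportions
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        # Shannon entropy per criterion, normalised to [0, 1]
        E = -np.nansum(P * np.log(P), axis=0) / np.log(n)
    return (1.0 - E) / (1.0 - E).sum()
```

Identical distributions give a divergence of 0 and disjoint ones give 1; in `entropy_weights`, a criterion with identical values across all services carries no discriminating information and receives (near-)zero weight.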
Variational Mode Decomposition-based Synchronous Multi-Frequency Electrical Impedance Tomography
IF 1.1 · Region 4 (Computer Science) · Q3 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2022-09-23 · DOI: 10.5755/j01.itc.51.3.30014
Qing-Xin Pan, Yang Li, Nan Wang, Peng-fei Zhao, Lan Huang, Zhongyi Wang
Electrical Impedance Tomography (EIT) offers non-invasive, low-cost, safe, and fast structural and functional imaging with a simple system, and can map the distribution and changes of the root zone. Multi-frequency EIT addresses the limitation that single-frequency EIT carries only the impedance information of a single excitation frequency, yet obtaining multi-frequency electrical impedance tomograms simultaneously remains challenging. To address the problem, a mixed signal superimposing multiple frequencies is injected into the object; separating the measured mixed voltage signals then quickly yields the electrical impedance information at the different frequencies at the same time. Since the measurement signal is a multi-frequency signal, the quality of its decomposition directly affects imaging accuracy. To obtain more accurate data, this article uses the variational mode decomposition (VMD) method to decompose the measured multi-frequency signal. Accurate amplitude and phase information can be obtained simultaneously under multi-frequency excitation, and these data can be used to reconstruct the electrical impedance distribution. The results show that the proposed method achieves the expected imaging effect.
It is concluded that using VMD to process multi-frequency signal data is more accurate, yields a better imaging effect, and can be applied to multi-frequency electrical impedance imaging in practice. Citation: Information Technology and Control, vol. 51, no. 3, pp. 446-466 (2022-09-23).
Cited: 1
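The core measurement task above is recovering the amplitude and phase of each excitation frequency from one mixed voltage signal. The paper does this with VMD; as a much simpler illustration of the same idea, the sketch below separates a synthetic two-tone signal with an FFT, assuming the excitation frequencies are known, stationary, and aligned with FFT bins (all names and numbers here are ours, not from the paper):

```python
import numpy as np

fs = 1000.0                      # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)    # 1 s of samples -> 1 Hz bin spacing
# Mixed excitation: 50 Hz and 120 Hz components with known amplitude/phase
sig = 2.0 * np.sin(2 * np.pi * 50 * t + 0.5) + 1.0 * np.sin(2 * np.pi * 120 * t - 0.3)

spec = np.fft.rfft(sig)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

def amp_phase(f0):
    """Amplitude and phase of the spectral component nearest f0."""
    k = np.argmin(np.abs(freqs - f0))
    c = spec[k]
    amp = 2.0 * np.abs(c) / len(t)   # single-sided amplitude scaling
    phase = np.angle(c)              # for sin(wt + phi) this equals phi - pi/2
    return amp, phase

a50, p50 = amp_phase(50.0)
a120, p120 = amp_phase(120.0)
```

Because both tones fall exactly on FFT bins, the recovered amplitudes are 2.0 and 1.0 and the phases equal the injected phases minus π/2. VMD goes further than this sketch: it decomposes the signal into band-limited modes adaptively, without requiring bin-aligned, stationary components.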
Journal: Information Technology and Control