Cloud storage is a significant cloud computing (CC) service that enables clients to save and retrieve data at any time and from anywhere. With the growing demand for and familiarity with the CC environment, various security threats and vulnerabilities have emerged. Data integrity and privacy are major problems in the CC environment, where data may be stored in distinct geographic regions, so privacy preservation and data integrity have become important user concerns. Several auditing protocols depend heavily on conventional public key infrastructure, which leads to high computational complexity and is unsuitable for multi-user settings. To resolve these issues, this study develops a new blockchain enabled auditing with optimal multi‐key homomorphic encryption (BEA‐OMKHE) technique for the public cloud environment. The proposed BEA‐OMKHE technique aims to assure data integrity, security, and auditing in public cloud storage. The OMKHE technique achieves data integrity in the cloud environment through the design of an end-to-end encryption system. Secure key generation and encryption are carried out with the MKHE technique, making the data highly secure, and the choice of keys is performed by the improved beetle antenna search (IBAS) optimization algorithm. Therefore, the proposed BEA‐OMKHE technique offers an efficient way of enhancing data integrity in the CC environment. The performance of the BEA‐OMKHE technique is validated and the results are inspected under various aspects. The comparative analysis confirms the superiority of the BEA‐OMKHE technique in terms of measures such as communication cost, encryption time, decryption time, computation cost, privacy preserving rate, and authentication accuracy.
{"title":"Block chain enabled auditing with optimal multi‐key homomorphic encryption technique for public cloud computing environment","authors":"Venkata Naga Rani Bandaru, P. Visalakshi","doi":"10.1002/cpe.7128","DOIUrl":"https://doi.org/10.1002/cpe.7128","url":null,"abstract":"Cloud storage is a significant cloud computing (CC) service which enables the client to save and retrieve the data at anytime and anywhere. Due to the increased demand and familiarity of the CC environment, different kinds of security threats and susceptibilities are raised. Data integrity and privacy are the major problems in CC environment where the data can be saved in distinct geographic regions. So, privacy preservation and data integrity become important factors of the user concern related to the CC environment. Several auditing protocols are majorly dependent upon the conventional public key infrastructure, which led to high computational complexity and it is unsuitable for the setting of multiple users. To resolve these issues, this study develops a new block chain enabled auditing with optimal multi‐key homomorphic encryption (BEA‐OMKHE) technique for public cloud environment. The proposed BEA‐OMKHE technique aims to assure data integrity, security, and auditing in public cloud storage. Besides, an OMKHE technique is derived to accomplish data integrating into the cloud environment by the design of end to end encryption system. A secure generation of keys and encryption processes are carried out by the use of MKHE technique; thereby the data becomes highly secure. In addition, the choice of keys is performed by the improved beetle antenna search optimization (IBAS) algorithm. Therefore, the proposed BEA‐OMKHE technique offers an efficient way of enhancing the data integrity in CC method. The performance validation of the BEA‐OMKHE technique takes place and the results are inspected under various aspects. The comparative result analysis ensured the betterment of the BEA‐OMKHE technique interms of different measures such as communication cost, encryption time, decryption time, computation cost, privacy preserving rate, and authentication accuracy.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"150 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87272427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Weimin Li, Dingmei Wei, Xiaokang Zhou, Shaohua Li, Qun Jin
The spread of rumors has a major negative impact on social stability. Traditional rumor-spreading models are mostly based on infectious disease models and do not consider the influence of individual differences and the network structure on rumor spreading. In this paper, we propose a rumor Fick‐spreading model that integrates information decay in social networks. The dissemination of rumors in social networks is random and uncertain and is affected by the dissemination capabilities of individuals and the network environment. The rumor Fick‐transition coefficient and Fick‐transition gradient are defined to determine the influence of the individual transition capacity and the network environment on rumor propagation, respectively. The Fick‐state transition probability is used to describe the probability of change of an individual's state. Moreover, an information decay function is defined to characterize the self‐healing probability of individuals. According to the different roles and reactions of users during rumor dissemination, the user states and the rumor dissemination rules among users are refined, and the influence of the network structure on rumor dissemination is ascertained. The experimental results demonstrate that the proposed model outperforms other rumor-spreading models.
{"title":"F‐SWIR: Rumor Fick‐spreading model considering fusion information decay in social networks","authors":"Weimin Li, Dingmei Wei, Xiaokang Zhou, Shaohua Li, Qun Jin","doi":"10.1002/cpe.7166","DOIUrl":"https://doi.org/10.1002/cpe.7166","url":null,"abstract":"The spread of rumors has a major negative impact on social stability. Traditional rumor spreading models are mostly based on infectious disease models and do not consider the influence of individual differences and the network structure on rumor spreading. In this paper, we propose a rumor Fick‐spreading model that integrates information decay in social networks. The dissemination of rumors in social networks is random and uncertain and is affected by the dissemination capabilities of individuals and the network environment. The rumor Fick‐transition coefficient and Fick‐transition gradient are defined to determine the influence of the individual transition capacity and the network environment on rumor propagation, respectively. The Fick‐state transition probability is used to describe the probability of change of an individual's state. Moreover, an information decay function is defined to characterize the self‐healing probability of individuals. According to the different roles and reactions of users during rumor dissemination, the user state and the rumor dissemination rules among users are refined, and the influence of the network structure on the rumor dissemination is ascertained. The experimental results demonstrate that the proposed model outperforms other rumor spread models.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"35 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73126309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alexandre Denis, Julien Jaeger, E. Jeannot, Florian Reynier
By allowing computation/communication overlap, MPI nonblocking collectives (NBC) are supposed to improve application scalability and performance. However, it is known that to actually get overlap, the MPI library has to implement progression mechanisms in software or rely on the network hardware. These mechanisms may or may not be present, may be adequate or perfectible, may affect communication performance, or may interfere with computation by stealing CPU cycles. From a user's point of view, assessing and understanding the behavior of an MPI library with respect to computation/communication overlap is difficult. In this article, we propose a methodology to assess the computation/communication overlap of NBC. We propose new metrics to measure how much communication and computation overlap, and to evaluate how they interfere with each other. We integrate these metrics into a complete methodology. We compare our methodology with state-of-the-art metrics and benchmarks, and show that ours provides more meaningful information. We perform experiments on a large panel of MPI implementations and network hardware and show when and why overlap is efficient, nonexistent, or even degrades performance.
{"title":"A methodology for assessing computation/communication overlap of MPI nonblocking collectives","authors":"Alexandre Denis, Julien Jaeger, E. Jeannot, Florian Reynier","doi":"10.1002/cpe.7168","DOIUrl":"https://doi.org/10.1002/cpe.7168","url":null,"abstract":"By allowing computation/communication overlap, MPI nonblocking collectives (NBC) are supposed to improve application scalability and performance. However, it is known that to actually get overlap, the MPI library has to implement progression mechanisms in software or rely on the network hardware. These mechanisms may be present or not, adequate or perfectible, they may have an impact on communication performance or may interfere with computation by stealing CPU cycles. From a user point of view, assessing and understanding the behavior of an MPI library concerning computation/communication overlap is difficult. In this article, we propose a methodology to assess the computation/communication overlap of NBC. We propose new metrics to measure how much communication and computation do overlap, and to evaluate how they interfere with each other. We integrate these metrics into a complete methodology. We compare our methodology with state of the art metrics and benchmarks, and show that ours provides more meaningful informations. We perform experiments on a large panel of MPI implementations and network hardware and show when and why overlap is efficient, nonexistent or even degrades performance.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"39 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89351200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Meghana Thiyyakat, Subramaniam Kalambur, D. Sitaram
Cloud providers place tasks from multiple applications on the same resource pool to improve the resource utilization of the infrastructure. The consequent resource contention has an undesirable effect on latency‐sensitive tasks. In this article, we present Niyama, a resource isolation approach that uses a modified version of deadline scheduling to protect latency‐sensitive tasks from CPU bandwidth contention. Conventionally, deadline scheduling has been used to schedule real‐time tasks with well‐defined deadlines. Therefore, it cannot be used directly when the deadlines are unspecified. In Niyama, we estimate deadlines in intervals and secure the bandwidth required for the interval, thereby ensuring optimal job response times. We compare our approach with cgroups: Linux's default resource isolation mechanism used in containers today. Our experiments show that Niyama reduces the average delay in tasks by 3×–20× when compared to cgroups. Since Linux's deadline scheduling policy is work‐conserving in nature, there is a small drop in the server‐level CPU utilization when Niyama is used naively. We demonstrate how the use of core reservation and oversubscription in the inter‐node scheduler can be used to offset this drop; our experiments show a 1.3×–2.24× decrease in job response time delay over cgroups while achieving high CPU utilization.
{"title":"Niyama: Node scheduling for cloud workloads with resource isolation","authors":"Meghana Thiyyakat, Subramaniam Kalambur, D. Sitaram","doi":"10.1002/cpe.7196","DOIUrl":"https://doi.org/10.1002/cpe.7196","url":null,"abstract":"Cloud providers place tasks from multiple applications on the same resource pool to improve the resource utilization of the infrastructure. The consequent resource contention has an undesirable effect on latency‐sensitive tasks. In this article, we present Niyama—a resource isolation approach that uses a modified version of deadline scheduling to protect latency‐sensitive tasks from CPU bandwidth contention. Conventionally, deadline scheduling has been used to schedule real‐time tasks with well‐defined deadlines. Therefore, it cannot be used directly when the deadlines are unspecified. In Niyama, we estimate deadlines in intervals and secure bandwidth required for the interval, thereby ensuring optimal job response times. We compare our approach with cgroups: Linux's default resource isolation mechanism used in containers today. Our experiments show that Niyama reduces the average delay in tasks by 3 ×$$ times $$ –20 ×$$ times $$ when compared to cgroups. Since Linux's deadline scheduling policy is work‐conserving in nature, there is a small drop in the server‐level CPU utilization when Niyama is used naively. We demonstrate how the use of core reservation and oversubscription in the inter‐node scheduler can be used to offset this drop; our experiments show a 1.3 ×$$ times $$ –2.24 ×$$ times $$ decrease in delay in job response time over cgroups while achieving high CPU utilization.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"90 8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87728387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The popularity of cloud storage has attracted organizations to store their data in it. Since many users share cloud storage space, duplicate data contents are unavoidable, which sometimes leads to higher storage space utilization. In the case of multimedia data, the extremely high number of duplicate copies wastes memory, and identifying the final duplicate copies in the cloud takes more time. To overcome this problem, we employ a storage optimization model for deduplication. Storing the hash value of digital data requires additional memory space, so this study proposes an enhanced prefix hash tree (EPHT) method to optimize the image and text deduplication system and reduce the overhead caused by this procedure. The efficiency of the proposed approach is compared with the interpolation search technique using different tree heights (2, 4, 2, 8, 16) in terms of space and time complexity. The proposed EPHT technique shows improvements in speed and space complexity as the number of levels in the EPHT increases.
{"title":"An efficient enhanced prefix hash tree model for optimizing the storage and image deduplication in cloud","authors":"G. Sujatha, R. Raj","doi":"10.1002/cpe.7199","DOIUrl":"https://doi.org/10.1002/cpe.7199","url":null,"abstract":"The popularity of the cloud storage space mainly attracted organizations to store their data in them. Therefore, the avoidance of duplicate data contents is unavoidable and several users share the cloud storage space for data storage, and sometimes this makes higher storage space utilization. Because of the extremely high duplicate copy, memory wastage arises in the case of multimedia data. Identifying the final duplicate copies in the cloud takes more time. To overcome this problem, we employ a significant storage optimization model for deduplication. The digital data hash value is stored by requiring an additional memory space. This study proposed an enhanced prefix hash tree (EPHT) method to optimize the image and text deduplication system to reduce the overhead caused by this procedure. The efficiency of the proposed approach is compared with the interpolation search technique using different levels of tree height (2, 4, 2, 8, 16) in terms of space and time complexity. The proposed EPHT technique shows improvements in terms of speed and space complexity when the number of levels in the EPHT increases.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91104143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
At present, artificial intelligence (AI) technology is widely used. AI technology can enrich people's lives, improve work efficiency, and drive technical development, and it can also raise the efficiency of enterprises and bring substantial profits to their growth. Accordingly, the author systematically analyzes the advantages and disadvantages of AI technology and discusses its application in computer network technology from the perspectives of network security, enterprise management, network systems, and evaluation technology.
{"title":"Research on the application of artificial intelligence in computer network technology in the era of big data","authors":"Zhenyu Xu","doi":"10.1002/cpe.7262","DOIUrl":"https://doi.org/10.1002/cpe.7262","url":null,"abstract":"At present, artificial intelligence technology has been widely used. Artificial intelligence technology can not only enrich people's lives, effectively improve work efficiency, achieve technical development, but also improve the efficiency of enterprises, and bring rich profits for the development of enterprises. Therefore, the author systematically analyzes the advantages and disadvantages of AI technology, and expounds the application of AI technology in computer network technology from the aspects of network security technology, enterprise management technology, network system and evaluation technology.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86162570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Crowdsourcing (CS) plays a significant role in the Internet of Things framework. A sufficient number of participants carrying sensors or IoT devices is necessary to obtain maximum coverage for a CS task within a given budget. Cloud computing is used for centralized processing, storage, and large-scale data analysis, but the delay of transferring data to cloud servers makes decision-making time-consuming; fog computing addresses this by processing data closer to its source. We therefore offer FogiRecruiter, a novel framework that efficiently chooses participants for data collection from the critical environment while staying within a budget. We also utilize fuzzy logic to pick the best fog nodes for relaying data, and then forward it to faraway cloud servers, enabling emergency communication even when a direct connection to the cloud is impractical. Simulations and prototype testing show the efficacy of the proposed approach.
{"title":"FogiRecruiter: A fog‐enabled selection mechanism of crowdsourcing for disaster management","authors":"Riya Samanta, S. Ghosh","doi":"10.1002/cpe.7207","DOIUrl":"https://doi.org/10.1002/cpe.7207","url":null,"abstract":"In the Internet of Things framework, crowdsourcing (CS) has played a significant role. A sufficient number of participants carrying sensors or IoT devices are necessary to obtain maximum coverage within a given budget for CS a task. Cloud computing is used for centralized processing, storage, and large‐scale data analysis. The delay associated with transferring data to cloud servers creates a time‐consuming decision‐making process. Fog computing is responsible for this capability. As a result, FogiRecruiter, a novel framework, is offered to efficiently choose participants for data collection from the vital environment while staying within a budget. We also utilize fuzzy logic to pick the best fog nodes for relaying data to them and then to faraway cloud servers. Realizing emergency communication, despite the fact that a direct connection to the cloud is inconvenient. Simulations and prototype testing are used to show the efficacy of the proposed approach.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"21 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82124786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In digital forensics, image tamper detection and localization have attracted increased attention in recent years, as the standard methods have limited descriptive ability and high computational costs. This research therefore introduces a novel image tamper detection and localization model with three major phases: feature extraction, tamper detection, and tamper localization. From the input digital images, a group of features is extracted, including Scale-based Adaptive Speeded Up Robust Features (SA‐SURF), Discrete Wavelet Transform (DWT) based Patched Local Vector Pattern (LVP) features, HoG features with harmonic-mean-based PCA, and MBFDF. The extracted features are then used to train an optimized Convolutional Neural Network (CNN) in the tamper detection phase. Since the CNN is the key decision-maker on the presence or absence of tampering, its weighting parameters are fine-tuned via a novel Improved Sea-lion Customized Firefly (ISCFF) algorithm, which enhances detection accuracy. Once an image is recognized as tampered, the tampered regions must be localized. In the tamper localization phase, copy-move tampering is localized using SIFT features, splicing tampering is localized using a DBN, and noise inconsistency is localized with a newly introduced threshold-based localization technique. The simulation outcomes illustrate that the adopted model attains better tamper detection and localization performance than existing methods.
{"title":"Detection and localization of image tampering in digital images with fused features","authors":"Mohassin Ahmad, F. Khursheed","doi":"10.1002/cpe.7191","DOIUrl":"https://doi.org/10.1002/cpe.7191","url":null,"abstract":"In digital forensics, image tamper detection and localization have attracted increased attention in recent days, where the standard methods have limited description ability and high computational costs. As a result, this research introduces a novel picture tamper detection and localization model. Feature extraction, tamper detection, as well as tamper localization are the three major phases of the proposed model. From the input digital images, a group of features like “Scale‐based Adaptive Speeded Up Robust Features (SA‐SURF), Discrete Wavelet Transform (DWT) based Patched Local Vector Pattern (LVP) features, HoG feature with harmonic mean based PCA and MBFDF” are extracted. Then, with this extracted feature strain the “optimized Convolutional Neural Network (CNN)” will be trained in the tamper detection phase. Since it is the key decision‐maker about the presence/absence of tamper, its weighting parameters are fine‐tuned via a novel improved Sea‐lion Customized Firefly algorithm (ISCFF) model. This ensures the enhancement of detection accuracy. Once an image is recognized to have tampers, then it is essential to identify the tamper localization. In the tamper localization phase, the copy‐move tampers are localized using the SIFT features, splicing tampers are localized using the DBN and the noise inconsistency is localized with a newly introduced threshold‐based tamper localization technique. The simulation outcomes illustrate that the adopted model attains better tamper detection as well as localization performance over the existing methods.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84515170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this article, the detection and categorization of acute intracranial hemorrhage (ICH) subtypes using a multilayer DenseNet‐ResNet architecture with an improved random forest (IRF) classifier is proposed to detect the subtypes of intracerebral hemorrhage with high accuracy and less computational time. The brain CT images are taken from the publicly available PhysioNet repository dataset and preprocessed to eliminate unwanted noise. Image features are then extracted using a multilayer densely connected convolutional network (DenseNet) combined with a residual network (ResNet) architecture with multiple convolutional layers. The subtypes, namely epidural hemorrhage (EDH), subarachnoid hemorrhage (SAH), intraparenchymal hemorrhage (IPH), subdural hemorrhage (SDH), and intraventricular hemorrhage (IVH), are classified using the IRF classifier with high accuracy. The simulation is carried out in MATLAB. The proposed multilayer‐DenseNet‐ResNet‐IRF attains 23.44%, 31.93%, 42.83%, and 41.9% higher accuracy than the existing methods, namely a deep learning algorithm for automatic detection and classification of acute intracranial hemorrhages in head CT scans (ICH‐DC‐2D‐CNN), fusion-based deep learning with a nature-inspired algorithm for the diagnosis of intracerebral hemorrhage (ICH‐DC‐FSVM), and detection of intracranial hemorrhage on CT scan images using a convolutional neural network (ICH‐DC‐CNN) and double fully convolutional networks (FCNs), respectively.
{"title":"Detection and categorization of acute intracranial hemorrhage subtypes using a multilayer DenseNet‐ResNet architecture with improved random forest classifier","authors":"B. M. Jenefer, K. Senathipathi, Aarthi, Annapandi","doi":"10.1002/cpe.7167","DOIUrl":"https://doi.org/10.1002/cpe.7167","url":null,"abstract":"In this article, the detection and categorization of acute intracranial hemorrhage (ICH) subtypes using a multilayer DenseNet‐ResNet architecture with improved random forest classifier (IRF) is proposed to detect the subtypes of intracerebral hemorrhage with high accuracy and less computational time. Here, the brain CT images are taken from the physionet repository publicly dataset. Then the images are preprocessed to eliminate the unwanted noises. After that, the image features are extracted by using multilayer densely connected convolutional network (DenseNet) combined with residual network (ResNet) architecture with multiple convolutional layers. The subtypes are epidural hemorrhage (EDH), subarachnoid hemorrhage (SAH), intraparenchymal hemorrhage (IPH), subdural hemorrhage (SDH), intraventricular hemorrhage (IVH) are classified by using an IRF classifier with high accuracy. The simulation process is carried out in MATLAB site. The proposed multilayer‐DenseNet‐ResNet‐IRF attains higher accuracy 23.44%, 31.93%, 42.83%, 41.9% is compared with the existing methods, such as deep learning algorithm for automatic detection and classification of acute intracranial hemorrhages in head CT scans (ICH‐DC‐2D‐CNN), fusion‐based deep learning along nature‐inspired algorithm for the diagnosis of intracerebral hemorrhage (ICH‐DC‐FSVM), and detection of intracranial hemorrhage on CT scan images using convolutional neural network (ICH‐DC‐CNN) and double fully convolutional networks (FCNs), respectively.","PeriodicalId":10584,"journal":{"name":"Concurrency and Computation: Practice and Experience","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82397746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}