Severity Level Classification of Brain Tumor based on MRI Images using Fractional-Chicken Swarm Optimization Algorithm
R Cristin, K Suresh Kumar and P Anbhazhagan. The Computer Journal, vol. 64, no. 10, pp. 1514–1530, June 2021. DOI: 10.1093/comjnl/bxab057.

Brain tumor classification helps to identify and localize tumors in the brain, and early diagnosis and classification of a tumor can considerably extend a patient's life. Among the various imaging modalities, magnetic resonance imaging (MRI) is widely used by clinical experts because it offers good contrast information for brain tumors. An effective classification method named fractional-chicken swarm optimization (fractional-CSO) is introduced to perform severity-level tumor classification. Here, chicken swarm behavior is merged with the fractional derivative factor to enhance the accuracy of severity-level classification. The optimal solution is obtained by updating the rooster positions according to their fitness values. The brain images are pre-processed, features are extracted and tumor classification is carried out. The severity level of the tumor is then classified using a deep recurrent neural network trained by the proposed fractional-CSO algorithm. On the simulated BRATS dataset, the proposed fractional-CSO achieved an accuracy of 93.35%, a specificity of 96% and a sensitivity of 95%.
Contact Tracing Solution for Global Community
Hari T S Narayanan. The Computer Journal, vol. 64, no. 10, pp. 1565–1574, June 2021. DOI: 10.1093/comjnl/bxab099.

Several contact tracing solutions have appeared since the outbreak of COVID-19 (SARS-CoV-2). All of these solutions are localized, that is, specific to a country; the apps built on them do not interwork with each other, and there are no standards for the proximity data they collect. Once international travel restrictions are relaxed, this will become an issue. This paper explores the issue by addressing one of the key requirements of contact tracing solutions. All current solutions use an identifier, the Proximity Identifier (PID), that anonymously represents the user in the exchanged proximity data. The PIDs used in these applications vary in structure, management and properties. The paper first identifies the common desirable properties of a PID, including some non-obvious ones required for global application. This identification is essential for designing and developing a contact tracing solution that works seamlessly across national boundaries. The paper also evaluates representative solutions from two different design classes against these properties.
Identifying Influential Nodes in Complex Networks Based on Neighborhood Entropy Centrality
Liqing Qiu, Jianyi Zhang, Xiangbo Tian and Shuang Zhang. The Computer Journal, vol. 64, no. 10, pp. 1465–1476, June 2021. DOI: 10.1093/comjnl/bxab034.

Identifying influential nodes is a fundamental and open issue in the analysis of complex networks, and measuring the spreading capability of nodes is an attractive challenge in this field. Node centrality measures, including degree centrality (DC), betweenness centrality (BC) and closeness centrality (CC), are among the most popular methods for identifying influential nodes. DC is efficient but not very effective, whereas BC and CC are effective but computationally expensive. To balance effectiveness and efficiency, this paper proposes neighborhood entropy centrality to rank influential nodes. The proposed method uses the notion of entropy to improve DC. To evaluate performance, the susceptible-infected-recovered (SIR) model is used to simulate the spreading of messages on nine real-world networks. The experimental results confirm the accuracy and efficiency of the proposed method.
Robust Object Detection and Localization Using Semantic Segmentation Network
A Francis Alexander Raghu and J P Ananth. The Computer Journal, vol. 64, no. 10, pp. 1531–1548, June 2021. DOI: 10.1093/comjnl/bxab079.

Object localization, which analyzes the spatial relations of objects across sets of images, is advancing rapidly. Many object localization techniques rely on classification, which decides whether an object is present but does not provide object information through pixel-wise segmentation. This work introduces an object detection and localization technique using a semantic segmentation network (SSN) and a deep convolutional neural network (Deep CNN). The proposed technique consists of the following steps. Initially, the image is denoised by filtering. The pre-processed image then undergoes a sparking process to make it suitable for object segmentation with the SSN. The resulting segments are fed to the proposed Stochastic-Cat Crow Optimization (Stochastic-CCO)-based Deep CNN for object classification. The Stochastic-CCO, obtained by integrating stochastic gradient descent with the CCO, is used to train the Deep CNN. The CCO itself is designed by integrating cat swarm optimization (CSO) and the crow search algorithm, taking advantage of both optimization algorithms. The experiments show that the proposed Stochastic-CCO-based Deep CNN achieves a maximal accuracy of 98.7%.
Energy-Efficient Cluster-Based Routing Protocol for WSN Based on Hybrid BSO–TLBO Optimization Model
Kannan Krishnan, B Yamini, Wael Mohammad Alenazy and M Nalini. The Computer Journal, vol. 64, no. 10, pp. 1477–1493, June 2021. DOI: 10.1093/comjnl/bxab044.

Wireless sensor networks are among the cheapest and most rapidly evolving networks in modern communication; they use cost-effective sensor devices to sense a variety of physical and environmental parameters. This work exploits such networks to provide an energy-efficient weighted clustering method that increases the network lifespan. We propose a novel energy-efficient method that uses the BrainStorm Optimization (BSO) algorithm to select the ideal cluster head (CH) and thereby reduce energy drain. The effectiveness of BSO is further enhanced by incorporating a modified teaching–learning-based optimization (MTLBO) algorithm. The hybrid BSO–MTLBO algorithm improves throughput and network lifetime while reducing the energy consumption of nodes and CHs, the death of sensor nodes and the routing overhead. The performance of the proposed work is compared with existing approaches, and the results show that it outperforms all of them.
Underwater Image Enhancement With Optimal Histogram Using Hybridized Particle Swarm and Dragonfly
R Prasath and T Kumanan. The Computer Journal, vol. 64, no. 10, pp. 1494–1513, June 2021. DOI: 10.1093/comjnl/bxab056.

Underwater image processing is mainly concerned with correcting color distortion and light scattering, and much research has addressed these issues. The proposed model incorporates two phases, namely contrast correction and color correction. The contrast correction model involves two processes: (i) global contrast correction and (ii) local contrast correction. Image enhancement centers on histogram evaluation, so the optimal selection of the histogram limit is essential. For this optimization, a new hybrid algorithm called the swarm-updated Dragonfly Algorithm is introduced, a hybridization of Particle Swarm Optimization (PSO) and the Dragonfly Algorithm (DA). The paper focuses on Integrated Global and Local Contrast Correction (IGLCC). The proposed model is compared with conventional models such as Contrast-Limited Adaptive Histogram Equalization, IGLCC, dynamic stretching IGLCC-Genetic Algorithm, IGLCC-PSO, IGLCC-Firefly, IGLCC-Cuckoo Search, IGLCC-Distance-Oriented Cuckoo Search and DA, and its superiority is demonstrated.
Be Scalable and Rescue My Slices During Reconfiguration
Adrien Gausseran, Frederic Giroire, Brigitte Jaumard and Joanna Moulierac. The Computer Journal, vol. 64, no. 10, pp. 1584–1599, June 2021. DOI: 10.1093/comjnl/bxab108.

Modern 5G networks promise more bandwidth, less delay and more flexibility for an ever-increasing number of users and applications, with Software Defined Networking, Network Function Virtualization and Network Slicing as key enablers. In that context, efficiently provisioning the network and cloud resources of a wide variety of applications with dynamic user demand is a real challenge. We study the network slice reconfiguration problem. Reconfiguring network slices from time to time reduces network operational costs and increases the number of slices that can be managed within the network; however, it affects the Quality of Service of users during the reconfiguration step. To address this issue, we study solutions implementing a make-before-break scheme. We propose new models and scalable algorithms, relying on column generation techniques, that solve large instances in a few seconds.
Pruning of Health Data in Mobile-Assisted Remote Healthcare Service Delivery
Safikureshi Mondal and Nandini Mukherjee. The Computer Journal, vol. 64, no. 10, pp. 1549–1564, June 2021. DOI: 10.1093/comjnl/bxab083.

The use of cloud computing and mobile devices in healthcare service delivery is increasing, primarily because of the huge storage capacity of the cloud, the heterogeneous structure of health data and the user-friendly interfaces of mobile devices. We propose a healthcare delivery scheme in which a large knowledge base is stored in the cloud and user responses from mobile devices are fed to this knowledge base to reach a preliminary diagnosis based on patients' symptoms. However, instead of sending every response to the cloud and retrieving data from the cloud server, it may often be desirable to prune a portion of the knowledge base, which is stored in graph form, and download it to the mobile device. Downloading data from the cloud depends on the mobile device's storage, battery power and processor, the wireless network bandwidth and the cloud processor capacity. In this paper, we develop mathematical expressions involving these criteria and show how the parameters depend on each other. The expressions can be used in real-life scenarios to decide how much data to prune in order to save both energy and time.
Thanks to their excellent reliability, availability, flexibility and scalability, redundant arrays of independent (or inexpensive) disks (RAID) are widely deployed in large-scale data centers. RAID scaling effectively relieves the storage pressure of a data center and increases both the capacity and the I/O parallelism of storage systems. To regain load balance across all disks, old and new, some data are usually migrated from old disks to new ones. Owing to the unique parity layouts of erasure codes, traditional scaling approaches may incur high migration overhead in RAID-6 scaling. This paper proposes an efficient approach, SS6, based on Short-Code for RAID-6 scaling. The approach exhibits three salient features: first, SS6 introduces $\tau$