Abdolrasoul Sakhaei Gharagezlou, Mahdi Nangir, Nima Imani
—In this paper, the performance of a system in terms of energy efficiency (EE) is studied. To evaluate the EE performance, an appropriate power level is allocated to each user. The system under consideration is a multiple-input multiple-output (MIMO) system using the non-orthogonal multiple access (NOMA) method. Zero-forcing (ZF) precoding is employed, and the channel state information (CSI) is assumed to be perfect. First, the parameters that affect the channel, such as path loss and beamforming, are investigated, and the channel matrix is obtained. To improve system performance, better conditions are provided for users with poor channel conditions: more power is allocated to these users, i.e., the total transmission power is divided according to each user's distance from the base station (BS) and channel conditions. The EE maximization problem is formulated with two constraints, a minimum user rate and a maximum transmission power. This non-convex problem is converted into a convex one using optimization properties, and the constrained problem is turned into an unconstrained one using the Lagrange dual function. Numerical and simulation results validate the mathematical analysis and show that the proposed scheme outperforms existing methods. The simulation results cover two different algorithms with the same objective function; in addition to the comparison with other methods, the outputs of these two algorithms are compared with each other.
"Energy Efficient Power Allocation in MIMO-NOMA Systems with ZF Precoding Using Cell Division Technique" by Abdolrasoul Sakhaei Gharagezlou, Mahdi Nangir, Nima Imani. International Journal of Information and Communication Technology Research, 2022-09-01. DOI: 10.52547/itrc.14.3.10
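The fractional rate-over-power objective described above is commonly handled with a Dinkelbach-style iteration, a standard companion to the Lagrange-dual treatment the paper uses. The sketch below is not the paper's multi-user MIMO-NOMA algorithm; it is a single-link toy with assumed parameters (channel gain `g`, noise power `sigma2`, circuit power `p_circuit`) showing how the EE ratio is maximized under a power cap:

```python
import math

def dinkelbach_ee(g, sigma2, bw, p_circuit, p_max, iters=30):
    """Maximize EE(p) = rate(p) / (p + p_circuit) subject to 0 <= p <= p_max."""
    q, p = 0.0, p_max  # q is the current EE estimate
    for _ in range(iters):
        if q > 0.0:
            # closed-form maximizer of bw*log2(1 + g*p/sigma2) - q*(p + p_circuit),
            # clipped to the feasible power range (water-filling-like)
            p = min(max(bw / (q * math.log(2)) - sigma2 / g, 0.0), p_max)
        rate = bw * math.log2(1.0 + g * p / sigma2)
        q = rate / (p + p_circuit)  # Dinkelbach update of the EE ratio
    return p, q
```

At the fixed point, q equals the optimal EE and p the EE-optimal transmit power; the full problem in the paper adds per-user minimum-rate constraints on top of this structure.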
Seyedeh Motahareh Hosseini, M. Aghdasi, B. Teimourpour, A. Albadvi
—Process analysis in engineering, procurement, and construction (EPC) companies is especially important and challenging because of the complexity of the work, the high level of communication between people, the different and non-integrated information systems, and the amount of capital involved in these projects. Limited research has been done on exploring business processes in these companies. In this study, to analyze the company's performance more accurately, three perspectives of process mining (process flow, case, and organizational) are analyzed using the event logs recorded in the supplier selection process. The results led to the identification of challenges in the process, including repetitive loops and duplicate activities, a survey of factors affecting the execution of the process, and an examination of the relationships between the people involved in the project, all of which can be used to improve the company's future performance.
"Implementing Process Mining Techniques to Analyze Performance in EPC Companies" by Seyedeh Motahareh Hosseini, M. Aghdasi, B. Teimourpour, A. Albadvi. International Journal of Information and Communication Technology Research, 2022-06-01. DOI: 10.52547/itrc.14.2.66
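A core primitive behind the process-flow perspective mentioned above is the directly-follows graph mined from an event log; repetitive loops show up as self-edges in it. A minimal sketch, with a made-up supplier-selection log (the activity names are illustrative, not the company's actual data):

```python
from collections import defaultdict

def directly_follows(event_log):
    """Count directly-follows pairs per case from (case_id, activity) events in order."""
    traces = defaultdict(list)
    for case_id, activity in event_log:
        traces[case_id].append(activity)
    dfg = defaultdict(int)
    for acts in traces.values():
        for a, b in zip(acts, acts[1:]):  # consecutive activities within one case
            dfg[(a, b)] += 1
    return dict(dfg)
```

An edge such as `("evaluate supplier", "evaluate supplier")` with a positive count is exactly the kind of repetitive loop the study reports.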
H. Vahdat-Nejad, F. Azizi, Mahdi Hajiabadi, F. Salmani, Sajedeh Abbasi, Mohadese Jamalian, Reyhane Mosafer, H. Hajiabadi, W. Mansoor
—The outbreak of COVID-19 in 2020 and the lack of an effective cure caused psychological problems among people, which have been reflected widely on social media. Analyzing a large number of English tweets posted in the early stages of the pandemic, this paper addresses three psychological parameters: fear, hope, and depression. The main challenge is extracting the tweets related to each of these parameters. To this end, three lexicons are proposed for these psychological parameters to extract the tweets through content analysis. A lexicon-based method is then used with GeoNames (a geographical database) to label tweets with country tags. Fear, hope, and depression trends are then extracted for the entire world and for 30 countries. According to the analysis, there is a high correlation between the frequency of tweets and the official daily statistics of active cases in many countries. Moreover, fear tweets dominate hope tweets in most countries, which shows the worldwide fear in the early months of the pandemic. Ultimately, the diagrams of many countries show unusual spikes caused by the dissemination of specific news and announcements.
"Large-Scale Twitter Mining for Extracting the Psychological Impacts of COVID-19" by H. Vahdat-Nejad, F. Azizi, Mahdi Hajiabadi, F. Salmani, Sajedeh Abbasi, Mohadese Jamalian, Reyhane Mosafer, H. Hajiabadi, W. Mansoor. International Journal of Information and Communication Technology Research, 2022-06-01. DOI: 10.52547/itrc.14.2.23
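The lexicon-based extraction step can be sketched as token overlap between a tweet and a parameter lexicon. The fear terms below are illustrative stand-ins; the paper's actual lexicons are not given in the abstract:

```python
import re

# Illustrative fear terms only -- not the lexicon proposed in the paper
FEAR_LEXICON = {"afraid", "scared", "panic", "worried", "terrified"}

def matches_lexicon(tweet, lexicon):
    """Content analysis by word match: does the tweet share any token with the lexicon?"""
    tokens = set(re.findall(r"[a-z']+", tweet.lower()))
    return not tokens.isdisjoint(lexicon)

def extract(tweets, lexicon):
    """Keep only the tweets related to one psychological parameter."""
    return [t for t in tweets if matches_lexicon(t, lexicon)]
```

Running this per day and per country tag yields the fear/hope/depression trend curves the paper analyzes.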
—Cloud computing is a computing model that uses network facilities to provision, use, and deliver computing services. Nowadays, reducing energy consumption has become very important for Cloud service providers, alongside efficiency. Dynamic virtual machine (VM) consolidation is a technique used for energy-efficient computing in Cloud data centers. In this paper, we offer solutions to reduce overall costs, including energy consumption and service level agreement (SLA) violations. To consolidate VMs onto a smaller number of physical machines, a novel SLA-aware VM placement method based on genetic algorithms is presented. To make the VM placement algorithm SLA-aware, the proposed approach treats workloads as non-stationary stochastic processes and automatically approximates them as stationary processes using a novel dynamic sliding window algorithm. Simulation results in the CloudSim toolkit confirm that the proposed virtual server consolidation algorithms provide significant total cost savings (evaluated by the ESV metric), about 45% better than the best of the benchmark algorithms.
"Cost Reduction Using SLA-Aware Genetic Algorithm for Consolidation of Virtual Machines in Cloud Data Centers" by Hossein Monshizadeh Naeen. International Journal of Information and Communication Technology Research, 2022-06-01. DOI: 10.52547/itrc.14.2.14
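The abstract does not spell out the dynamic sliding window algorithm, so the following is only a rough illustration of the idea under stated assumptions: grow the window backwards over the utilization history while older samples keep the window mean close to the current load, so the retained window is approximately stationary. The tolerance `tol` and cap `max_len` are invented parameters:

```python
def sliding_window(history, max_len=30, tol=0.1):
    """Length of the longest recent window whose mean stays within tol of the
    newest sample -- a simplified stand-in for a dynamic sliding window."""
    window = [history[-1]]
    for x in reversed(history[:-1]):
        candidate = window + [x]
        mean = sum(candidate) / len(candidate)
        if abs(mean - history[-1]) > tol or len(candidate) > max_len:
            break  # older samples shift the mean too much: non-stationary regime
        window = candidate
    return len(window)
```

A stable workload keeps the full history; a recent regime change (e.g. a load drop) truncates the window to the post-change samples, which is the stationarity approximation the placement algorithm needs.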
Mohammad Pooya Malek, Shaghayegh Naderi, Hossein Gharaee Garakani
—Almost every industry has been revolutionized by Artificial Intelligence. The telecommunication industry is one of them, improving customers' Quality of Service and Quality of Experience by enhancing networking infrastructure capabilities, which can lead to much higher rates, even in 5G networks. To this end, network traffic classification methods for identifying and classifying user behavior have been used. Traditional analysis with statistical-based, port-based, payload-based, and flow-based methods was the key for these systems before the fourth industrial revolution. Combining AI with such methods leads to higher accuracy and better performance. In the last few decades, numerous studies have been conducted on Machine Learning (ML) and Deep Learning (DL), but there are still doubts about when to use DL over ML or vice versa. This paper investigates the challenges in ML/DL use cases by surveying more than 140 related studies. We then analyze the results and present a practical way of classifying internet traffic for popular applications.
"A Review on Internet Traffic Classification Based on Artificial Intelligence Techniques" by Mohammad Pooya Malek, Shaghayegh Naderi, Hossein Gharaee Garakani. International Journal of Information and Communication Technology Research, 2022-06-01. DOI: 10.52547/itrc.14.2.1
M. Fasanghari, Helena Bahrami, Hamideh Sadat Cheraghchi
— In machine learning and data analysis, clustering large amounts of data is one of the most challenging tasks. Many fields, including research, health, social life, and commerce, rely on the information generated every second. The significance of this enormous amount of data in all facets of contemporary life has prompted numerous attempts to develop new methods for analyzing large amounts of data. In this research, an Incremental Heap Self-Organizing Map (IHSOM) is proposed for clustering a vast amount of data that continues to grow. The incremental nature of IHSOM allows the environment to change and evolve; in other words, IHSOM can quickly adapt to the size of a dataset. The heap binary tree structure of the proposed approach offers several advantages over other structures. First, the topology, or neighborhood relationship, between data in the input space is maintained in the output space. Outlier data are then routed to the tree's leaf nodes, where they can be managed efficiently. This capability is provided by a probability density function used as a threshold for allocating more similar data to a cluster and transferring less similar data to the following node. The pruning and expanding of nodes makes the algorithm noise-resistant, more precise in clustering, and memory-efficient, and the heap tree structure accelerates node traversal and reorganization after the addition or deletion of nodes. IHSOM's few user-defined parameters make it a practical unsupervised clustering approach. On both synthetic and real-world datasets, the performance of the proposed algorithm is evaluated and compared to existing hierarchical self-organizing maps and clustering algorithms. The outcomes of the investigation demonstrate IHSOM's proficiency in clustering.
"Clustering Large-Scale Data using an Incremental Heap Self-Organizing Map" by M. Fasanghari, Helena Bahrami, Hamideh Sadat Cheraghchi. International Journal of Information and Communication Technology Research, 2022-06-01. DOI: 10.52547/itrc.14.2.41
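IHSOM builds on the classic self-organizing map; heap-tree machinery aside, the basic SOM update it extends looks like the following minimal 1-D map (a plain-Python sketch, not the authors' implementation — node count, epochs, and neighborhood weights are assumed):

```python
import random

def train_som(data, n_nodes=4, epochs=100, lr=0.5, seed=0):
    """Minimal 1-D SOM: pull the best-matching node and its line neighbors toward each sample."""
    rng = random.Random(seed)
    nodes = [list(rng.choice(data)) for _ in range(n_nodes)]  # initialize from samples
    for epoch in range(epochs):
        alpha = lr * (1.0 - epoch / epochs)  # learning rate decays to zero
        for x in data:
            # best matching unit = nearest node in the input space
            bmu = min(range(n_nodes),
                      key=lambda i: sum((w - v) ** 2 for w, v in zip(nodes[i], x)))
            for i in (bmu - 1, bmu, bmu + 1):  # BMU plus its chain neighbors
                if 0 <= i < n_nodes:
                    h = 1.0 if i == bmu else 0.5  # weaker pull on neighbors
                    nodes[i] = [w + alpha * h * (v - w) for w, v in zip(nodes[i], x)]
    return nodes

def quantization_error(data, nodes):
    """Mean distance from each sample to its best-matching node."""
    total = 0.0
    for x in data:
        total += min(sum((w - v) ** 2 for w, v in zip(n, x)) ** 0.5 for n in nodes)
    return total / len(data)
```

The neighborhood pull is what preserves input-space topology in the output structure; IHSOM replaces the fixed node chain with a growing, prunable heap tree.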
Seyed Mehdi Mousavi, Ahmad Khademzadeh, A. Rahmani
— The IoT can lead to fundamental developments in health, education, urbanization, agriculture, industry, and other areas. Given the variety of end-user applications and needs, developing a versatile communication network that can support such diverse and heterogeneous applications is necessary, since it costs less to implement than a dedicated communication network for each application. LoRa is a type of LPWAN supported by the LoRa Alliance; thanks to its long range, low power consumption, and reasonable cost, the IoT has become the main target of LoRa deployments. LoRaWAN defines the protocol and system architecture on top of the LoRa physical layer. The LoRa physical layer uses proprietary chirp spread spectrum (CSS) modulation, which operates below the noise level, is resistant to fading, interference, and blocking attacks, and is difficult to decode. LoRa operates in the unlicensed frequency bands below 1 GHz, with different frequencies in different geographical regions. Despite its limitations in data transfer speed and QoS, LoRa is much more useful for IoT applications than short-range protocols such as WiFi and Bluetooth. Therefore, considering the importance and advantages of LoRa, this manuscript introduces the protocol and examines its network aspects, importance, and applications. Then, a solution based on the cognitive radio technique is presented to improve QoS so that LoRa technology can serve as a versatile communication infrastructure for the IoT.
"Toward a Versatile IoT Communication Infrastructure" by Seyed Mehdi Mousavi, Ahmad Khademzadeh, A. Rahmani. International Journal of Information and Communication Technology Research, 2022-03-01. DOI: 10.52547/itrc.14.1.25
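The range-versus-rate limitation mentioned above follows directly from CSS: a symbol spans 2^SF chips sent at the chip rate BW, so raising the spreading factor lengthens symbols (better sensitivity and range) while cutting the bit rate. A small sketch of the standard relations:

```python
def lora_symbol_time(sf, bw_hz):
    """CSS symbol duration: one symbol = 2**sf chips sent at chip rate bw_hz."""
    return (2 ** sf) / bw_hz

def lora_bit_rate(sf, bw_hz, coding_rate=4 / 5):
    """Raw bit rate: sf bits per symbol, scaled by the forward-error-correction rate."""
    return sf * coding_rate / lora_symbol_time(sf, bw_hz)

# SF7 @ 125 kHz: 1.024 ms symbols, ~5.5 kbit/s
# SF12 @ 125 kHz: 32.768 ms symbols, ~293 bit/s
```

This is why LoRa suits sporadic, small IoT payloads rather than the throughputs of WiFi or Bluetooth.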
—The most important challenge in wireless sensor networks (WSNs) is extending the network lifetime, which is directly related to energy consumption. Clustering is one of the well-known energy-saving solutions in WSNs. However, most studies repeat the cluster head selection in every round, which increases the number of sent and received messages; moreover, inappropriate cluster head selection and unbalanced clusters increase energy dissipation. To create balanced clusters and reduce energy consumption, we use a centralized network and relay nodes, respectively. Because classical methods are easily trapped in local minima, we apply a metaheuristic algorithm to select the optimal cluster heads. In this paper, the Grey Wolf Optimizer (GWO) is used: a simple and flexible algorithm capable of balancing the two phases of exploration and exploitation. To prolong the network lifetime and reduce the energy consumption of cluster head nodes, we propose a centralized multi-clustering scheme based on GWO that uses both energy and distance in cluster head selection. Our approach is compared with classical and metaheuristic algorithms in three scenarios based on the criteria of network lifetime, number of dead nodes per round, and total remaining energy (TRE) in the cluster head and relay nodes. The simulation results show that our method performs better than the others. In addition, to analyze scalability, the method is evaluated in terms of the number of nodes, network dimensions, and BS location; when these conditions grow by factors of 2 and 5, network performance improves by factors of 1.5 and 2, respectively.
"Energy Efficient Multi-Clustering Using Grey Wolf Optimizer in Wireless Sensor Network" by Maryam Ghorbanvirdi, S. M. Mazinani. International Journal of Information and Communication Technology Research, 2022-03-01. DOI: 10.52547/itrc.14.1.1
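For reference, the GWO core update moves each wolf toward a blend of the three best solutions (alpha, beta, delta), with a coefficient `a` decaying from 2 to 0 to shift the search from exploration to exploitation. The sketch below minimizes a toy sphere function rather than the paper's energy-and-distance cluster head fitness, which the abstract does not specify:

```python
import random

def gwo_minimize(fitness, dim, bounds, n_wolves=12, iters=100, seed=1):
    """Grey Wolf Optimizer: wolves encircle the three best solutions found so far."""
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=fitness)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 * (1.0 - t / iters)  # decays 2 -> 0: exploration -> exploitation
        new_wolves = []
        for w in wolves:
            pos = []
            for d in range(dim):
                estimates = []
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A = 2.0 * a * r1 - a  # |A| > 1 pushes away from the leader (explore)
                    C = 2.0 * r2
                    estimates.append(leader[d] - A * abs(C * leader[d] - w[d]))
                pos.append(min(max(sum(estimates) / 3.0, lo), hi))
            new_wolves.append(pos)
        wolves = new_wolves
    return min(wolves, key=fitness)
```

In the paper's setting, a candidate solution would encode a set of cluster heads and the fitness would combine residual energy and node-to-head distances.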
Sajedeh Lashgari, B. Teimourpour, Mostafa Akhavan-Safar
—Cancer-causing genes are genes in which mutations cause the onset and spread of cancer; they are called driver genes or cancer-causal genes. Several computational methods have been proposed to find them. Most of these methods are based on the genome sequencing of cancer tissues: they look for key mutations in genome data to predict cancer genes. This study proposes a new approach called centrality maximization intersection, cMaxDriver, as a network-based tool for predicting cancer-causing genes in the human transcriptional regulatory network. The approach uses degree, closeness, and betweenness centralities, without using genome data. We first constructed three cancer transcriptional regulatory networks using gene expression data and regulatory interactions as benchmarks. We then calculated the three centralities for the genes in each network and considered the nodes with the highest values in each centrality as important genes. Finally, we identified the nodes ranked highest in at least two centralities as cancer-causal genes. We compared the results with eighteen previous computational and network-based methods. The results show that the proposed approach significantly improves efficiency and F-measure. In addition, the cMaxDriver approach identifies unique cancer driver genes that other methods cannot.
{"title":"cMaxDriver: A Centrality Maximization Intersection Approach for Prediction of Cancer-Causing Genes in the Transcriptional Regulatory Network","authors":"Sajedeh Lashgari, B. Teimourpour, Mostafa Akhavan-Safar","doi":"10.52547/itrc.14.1.57","DOIUrl":"https://doi.org/10.52547/itrc.14.1.57","url":null,"abstract":"—Cancer-causing genes are genes in which mutations cause the onset and spread of cancer. These genes are called driver genes or cancer-causal genes. Several computational methods have been proposed so far to find them. Most of these methods are based on the genome sequencing of cancer tissues. They look for key mutations in genome data to predict cancer genes. This study proposes a new approach called centrality maximization intersection, cMaxDriver, as a network-based tool for predicting cancer-causing genes in the human transcriptional regulatory network. In this approach, we used degree, closeness, and betweenness centralities, without using genome data. We first constructed three cancer transcriptional regulatory networks using gene expression data and regulatory interactions as benchmarks. We then calculated the three mentioned centralities for the genes in the network and considered the nodes with the highest values in each of the centralities as important genes in the network. Finally, we identified the nodes with the highest value between at least two centralities as cancer causal genes. We compared the results with eighteen previous computational and network-based methods. The results show that the proposed approach has improved the efficiency and F-measure, significantly. 
In addition, the cMaxDriver approach has identified unique cancer driver genes, which other methods cannot identify.","PeriodicalId":270455,"journal":{"name":"International Journal of Information and Communication Technology Research","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125852510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
— In this paper, we study the spectral efficiency (SE) and energy efficiency (EE) of wireless-powered full-duplex (FD) heterogeneous networks (HetNets). In particular, we consider a two-tier HetNet with half-duplex (HD) massive multiple-input multiple-output (MIMO) macrocell base stations (MBSs), FD small-cell base stations (SBSs), and FD user equipment (UEs). UEs rely on energy harvesting (EH) from radio-frequency signals to charge their batteries for communication with serving base stations. During the energy harvesting phase, UEs are associated with MBSs/SBSs based on the mean maximum received power (MMP) scheme. In the subsequent data transmission phase, each UE downloads packets from the same MBS/SBS, while uploading packets to the nearest SBS using the harvested energy. We use tools from stochastic geometry to develop an analytical framework for the average UL power transfer and the UL and DL coverage probability analysis. We further investigate the EE of the proposed downlink-uplink decoupling (DUDe) scheme to demonstrate the impact of different system parameters on the EE. Finally, we validate the analytical results through simulation and discuss the significance of the proposed DUDe user association in improving the average DL and UL SE in wireless-powered FD HetNets.
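The MMP association step can be illustrated with a toy model: a UE joins whichever station offers the largest mean received power, i.e. transmit power attenuated by a standard path-loss law. This sketch is our own illustration under invented assumptions (the coordinates, transmit powers, and path-loss exponent are made up, and the paper's actual framework averages over fading and random BS locations via stochastic geometry):

```python
import math

def mmp_association(ue, mbs_list, sbs_list, p_mbs, p_sbs, alpha=3.5):
    """Associate a UE with the station whose mean received power,
    P_tx * d**(-alpha) under a simple path-loss law, is largest."""
    def mean_rx_power(p_tx, bs):
        d = math.dist(ue, bs)          # Euclidean UE-to-BS distance
        return p_tx * d ** (-alpha)    # average over fading drops out

    candidates = [("MBS", bs, mean_rx_power(p_mbs, bs)) for bs in mbs_list]
    candidates += [("SBS", bs, mean_rx_power(p_sbs, bs)) for bs in sbs_list]
    # Pick the tier/station with the maximum mean received power.
    return max(candidates, key=lambda t: t[2])

# A UE sitting near a low-power small cell but far from the macro BS:
tier, bs, pw = mmp_association((0.9, 0.0), [(0.0, 0.0)], [(1.0, 0.0)],
                               p_mbs=40.0, p_sbs=1.0)
print(tier)  # the nearby SBS wins despite its much lower transmit power
```

The example shows why MMP association naturally offloads nearby UEs to small cells: path loss dominates the transmit-power gap at short distances.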
{"title":"Spectral and Energy Efficiency WirelessPowered Massive-MIMO Heterogeneous\u0000Network","authors":"Sepideh Haghgoy, M. Mohammadi, Z. Mobini","doi":"10.52547/itrc.14.1.13","DOIUrl":"https://doi.org/10.52547/itrc.14.1.13","url":null,"abstract":"— In this paper, we study the spectral efficiency (SE) and energy efficiency (EE) of wireless-powered full-duplex (FD) heterogeneous networks (HetNets). In particular, we consider a two-tire HetNet with half duplex (HD) massive multiple-input multiple-output (MIMO) macrocell base stations (MBSs), FD small cell base stations (SBSs) and FD user equipments (UEs). UEs rely on energy harvesting (EH) from radio frequency signals to charge their batteries for communication with serving base stations. During the energy harvesting phase, UEs are associated to MBSs/SBSs based on the mean maximum received power (MMP) scheme. In the consecutive data transmission phase, each UE downloads packets from the same MBSs/SBSs, while uploads packets to the nearest SBSs using the harvested energy. We use tools from stochastic geometry to develop an analytical framework for the average UL power transfer and the UL and DL coverage probability analysis. We further investigate the EE of the proposed DUDe scheme to demonstrate the impact of different system parameters on the EE. 
Finally, we validate the analytical results through simulation and discuss the significance of the proposed DUDe user association to improve the average DL and UL SE in the wireless-powered FD HetNets.","PeriodicalId":270455,"journal":{"name":"International Journal of Information and Communication Technology Research","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116604687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}