Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420567
M. Jaberi, H. Mala
One of the basic operations over distributed data is finding the k-th greatest value in the union of several numerical datasets. The challenge arises when the datasets are private and their owners cannot trust any third party. In this paper, we propose a new secure protocol that finds the k-th greatest value by means of a secure summation sub-protocol, and we compare it with similar protocols. In particular, we show that our scheme is more efficient than the well-known protocol of Aggarwal et al. (2004) in terms of computation and communication complexity. Specifically, when each party Pi holds Ti = 1 secret value, our protocol incurs log m computation overhead and δ log m communication overhead per party, where m is the maximum acceptable value and δ is the communication overhead of the secure summation sub-protocol. These overheads are exactly half those of Aggarwal's protocol.
Title: Secure Determining of the k-th Greatest Element Among Distributed Private Values (2021 26th International Computer Conference, Computer Society of Iran (CSICC))
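The abstract does not spell out the protocol, but a log m overhead built on a summation sub-protocol suggests a binary search over the value range. The sketch below illustrates that generic idea under that assumption only; `insecure_sum` is a plain-sum stand-in for the secure summation sub-protocol (a real run would never reveal the individual counts), and `kth_greatest` is a hypothetical name, not the authors' construction.

```python
def insecure_sum(shares):
    """Placeholder for the secure summation sub-protocol."""
    return sum(shares)

def kth_greatest(private_datasets, k, m):
    """Find the k-th greatest value in the union of the parties' datasets,
    assuming all values are integers in [0, m]: binary search the threshold,
    using one (simulated) secure summation per round to count matches."""
    lo, hi = 0, m
    while lo < hi:
        mid = (lo + hi + 1) // 2
        # Each party locally counts its values >= mid; only the combined
        # count would be revealed by the secure summation sub-protocol.
        counts = [sum(1 for v in ds if v >= mid) for ds in private_datasets]
        if insecure_sum(counts) >= k:
            lo = mid          # at least k values are >= mid, move up
        else:
            hi = mid - 1
    return lo
```

Each round needs one secure summation, giving the log m round structure the abstract alludes to.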
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420543
R. Ghanbari, K. Borna
In this paper, we analyze different LSTM neural network models on a multi-step time-series dataset. The purpose of this study is to present a clear and precise method for applying LSTM neural networks to sequence datasets. These models can be reused on similar datasets and are designed to be adapted to various multi-step datasets with minimal adjustment. The central question of this study is whether a model can be built to predict the amount of electricity a household consumes over the next seven days. Using the specified models, we make predictions on the dataset and comprehensively compare the results obtained across the different models. The dataset consists of household electricity consumption data gathered over four years. We achieve the desired prediction results with the least error among the existing state-of-the-art models.
Title: Multivariate Time-Series Prediction Using LSTM Neural Networks
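As a minimal illustration of the multi-step setup described above (not the authors' exact models), the sketch below frames a univariate series as supervised windows: fourteen past days in, the next seven days out. A real experiment would train an LSTM on these (X, y) pairs; the window lengths and toy data here are assumptions for illustration.

```python
def make_windows(series, n_in, n_out):
    """Frame a series as supervised samples: n_in past steps as input,
    the next n_out steps as the multi-step prediction target."""
    X, y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])              # input window
        y.append(series[i + n_in:i + n_in + n_out])  # 7-step-ahead target
    return X, y

# Hypothetical daily consumption values; an LSTM would be trained on (X, y)
# to predict the next seven days from the preceding two weeks.
daily = list(range(30))
X, y = make_windows(daily, n_in=14, n_out=7)
```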
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420545
A. Alizadeh, Navid Malek Alayi, A. Fereidunian, H. Lesani
Recurrent neural networks (RNNs) utilize their internal state to handle variable-length sequences such as time series; here, the uncertain failure rates of a system. Component failure rate models are required to improve system reliability. Although the failure rate model is of undeniable importance in system reliability assessment, no accepted failure rate model has been proposed that considers all causes of failure, particularly random failures. Consequently, planners and decision makers face a high financial risk in their decisions about the system. In this paper, an approach is presented that considers the random failure rate alongside the deteriorating failure rate in order to mitigate this risk. The complexity of failure behavior is thus captured by modeling the failure data as a time series. Moreover, the resulting failure rate estimates are tested in a reliability-centered maintenance (RCM) implementation to demonstrate the importance of considering the random failure rate. The results show that a more effective preventive maintenance (PM) scheduling strategy can be adopted in the RCM problem when the proposed approach is used for failure rate modeling.
Title: A Recurrent Neural Network Approach to Model Failure Rate Considering Random and Deteriorating Failures
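The paper learns the failure model with an RNN, but the idea of combining a random and a deteriorating failure rate can be illustrated with a simple closed form. The sketch below assumes a constant random hazard plus a Weibull deterioration term (increasing over time for beta > 1); `lam_random`, `beta`, and `eta` are hypothetical parameters chosen for illustration, not values from the paper.

```python
def combined_failure_rate(t, lam_random, beta, eta):
    """Illustrative combined hazard: a constant random failure rate plus a
    Weibull deterioration term, which grows with t when beta > 1."""
    deteriorating = (beta / eta) * (t / eta) ** (beta - 1)
    return lam_random + deteriorating

# An RNN as in the paper would instead learn the rate from failure-time data;
# here we only generate the kind of series such a model might be trained on.
series = [combined_failure_rate(t, lam_random=0.01, beta=2.0, eta=100.0)
          for t in range(1, 11)]
```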
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420589
Mahboobeh Riahi-Madvar, B. Nasersharif, A. A. Azirani
Outlier detection in high-dimensional data faces the curse of dimensionality, where irrelevant features may prevent the detection of outliers. Principal Component Analysis (PCA) is widely used for dimensionality reduction in high-dimensional outlier detection. Since no single subspace can thoroughly capture the outlier data points, we propose combining the results of multiple subspaces. In this research, we propose a subspace outlier detection algorithm for high-dimensional data using an ensemble of PCA-based subspaces (SODEP). Three relevant subspaces are selected using PCA features to discover different types of outliers, and outlier scores are then computed in the projected subspaces. The experimental results show that our ensemble-based outlier detection is a promising method for high-dimensional data and is more efficient than the other methods compared.
Title: Subspace Outlier Detection in High Dimensional Data using Ensemble of PCA-based Subspaces
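A generic version of the ensemble idea can be sketched as follows. This is not the exact SODEP subspace-selection rule; it simply scores each point by its reconstruction error in several PCA subspaces and averages the scores, so that a point anomalous in any of the subspaces stands out.

```python
import numpy as np

def pca_subspace_scores(X, dims):
    """Score each point by its reconstruction error in several PCA subspaces
    (one per entry of `dims`), then average the scores across the ensemble."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal directions
    scores = np.zeros(len(X))
    for d in dims:
        V = Vt[:d].T                  # top-d principal directions
        recon = Xc @ V @ V.T          # project onto the subspace and back
        scores += np.linalg.norm(Xc - recon, axis=1)
    return scores / len(dims)

# Toy data: variance lives mostly in the first two features, plus one planted
# outlier that sticks out of that dominant subspace.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 2:] *= 0.05
X[0, 2:] = 3.0
scores = pca_subspace_scores(X, dims=[1, 2])
```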
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420577
Razie Roostaei, Marzieh Sheikhi, Z. Movahedi
Nowadays, mobile devices play an ever-growing role in everyday life, but they suffer from constraints such as limited storage, short battery lifetime, and weak computation capacity. To deal with these limitations, mobile devices offload their heavy applications to the cloud using mobile cloud computing technology. Depending on network conditions, offloading may impose delay and energy costs on mobile devices, so there is a tradeoff between local and remote execution. Further, offloading only some components of an application may be more cost-effective than offloading the whole application. In this paper, we propose a fine-grained computation offloading scheme that takes the sequencing of application components into account. The proposed scheme reduces the complexity of the decision algorithm from exponential to polynomial. The simulation and evaluation results demonstrate that offloading efficiency improves thanks to the reduced decision overhead.
Title: Fine-grain Computation Offloading Considering Application Components' Sequencing
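One standard way to turn an exponential placement decision over a component sequence into a polynomial one is dynamic programming over (component, location) states. The sketch below uses a hypothetical cost model, not the paper's exact formulation: per-component local and remote execution costs, plus a fixed transfer cost whenever execution changes sides (including the initial upload).

```python
def min_offload_cost(local, remote, transfer):
    """DP over (component, location): O(n) instead of enumerating all 2^n
    placements of n sequenced components. Execution starts on the device."""
    best_local, best_remote = 0.0, float(transfer)  # remote start pays upload
    for l, r in zip(local, remote):
        # Tuple assignment updates both states from the previous step at once.
        best_local, best_remote = (
            l + min(best_local, best_remote + transfer),
            r + min(best_remote, best_local + transfer),
        )
    return min(best_local, best_remote)
```

For example, with local costs [4, 4], remote costs [1, 1], and transfer cost 1, running both components remotely (1 upload + 1 + 1 = 3) beats every other placement.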
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420586
Akram Sabzmakan, S. L. Mirtaheri
With ever-expanding digital communications and the need for advanced interoperability and collaboration, organizations and entities need to share their digital assets. Cloud computing is now widely used for managing and storing resources, and access control is a critical issue that faces many challenges in distributed environments, including clouds. In this paper, we present a distributed model of a cloud access control system that utilizes role-based access control to manage resources and the parties' access securely. We provide interoperability between multiple organizations accessing shared resources by means of Ethereum blockchain smart contracts and access levels for the available resources. Roles define access permissions; however, unlike the traditional role-based access control model, roles are determined according to the collaborative project of the organizations involved and sometimes may not exist in any single organization; they can only be created through the organizations' interactions. Finally, to evaluate the model's cost and time parameters, we implement it with Ethereum smart contracts and deploy them on the Ethereum test network Rinkeby.
Title: An Improved Distributed Access Control Model in Cloud Computing by Blockchain
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420571
Ghazal Rezaei, M. Hashemi
With the new coronavirus crisis, the workload of medical devices has increased dramatically, leaving them increasingly vulnerable to security threats and in need of a comprehensive solution. In this work, we take advantage of the flexible and highly manageable nature of Software-Defined Networks (SDN) to design a thorough security framework that covers a health organization's various security requirements. Our solution is an advanced SDN firewall that resolves the issues facing traditional firewalls. It enables partitioning of the organization's network and the enforcement of different filtering and monitoring behaviors on each partition depending on its security conditions. We designed our model for efficient and dynamic security management of the network with minimal human intervention, which makes it generally suitable for networks with differing security requirements.
Title: An SDN-based Firewall for Networks with Varying Security Requirements
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420550
Seyed Mohammad Seyed Motehayeri, Vahid Baghi, E. M. Miandoab, A. Moeini
Off-policy Deep Reinforcement Learning (DRL) algorithms such as Deep Deterministic Policy Gradient (DDPG) have been used to teach intelligent agents to solve complicated problems in environments with continuous state-action spaces. Several methods, such as experience replay, which selects a batch of transitions from the replay memory buffer, have been successfully applied to increase training performance and achieve better speed and stability for these algorithms. However, environments with sparse reward functions are a challenge for these algorithms and degrade their performance. This research aims to make the transition selection process more efficient by increasing the likelihood of selecting important transitions from the replay memory buffer. Our proposed method works better with sparse reward functions, in particular in environments that have termination conditions. We use a secondary replay memory buffer that stores the more critical transitions; during training, transitions are selected from both the primary and the secondary replay buffer. We also use parallel environments to asynchronously execute and fill the primary and secondary replay buffers, which helps achieve better performance and stability. Finally, we evaluate our proposed approach on the Crawler model, one of the Unity ML-Agents tasks with a sparse reward function, against DDPG and AE-DDPG.
Title: Duplicated Replay Buffer for Asynchronous Deep Deterministic Policy Gradient
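The two-buffer idea can be sketched as follows. The "critical transition" criterion used here (nonzero reward or a terminal flag) and the half-and-half batch split are assumptions for illustration; the paper's selection rule and sampling ratios may differ.

```python
import random
from collections import deque

class DuplicatedReplayBuffer:
    """Keep a secondary buffer of critical transitions and draw each
    training batch from both buffers."""

    def __init__(self, capacity=10000):
        self.primary = deque(maxlen=capacity)
        self.secondary = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        t = (state, action, reward, next_state, done)
        self.primary.append(t)
        if done or reward != 0:       # heuristic for "critical" transitions
            self.secondary.append(t)

    def sample(self, batch_size):
        half = batch_size // 2
        batch = random.sample(self.primary, min(half, len(self.primary)))
        if self.secondary:
            batch += random.sample(
                self.secondary,
                min(batch_size - len(batch), len(self.secondary)))
        return batch
```

In a sparse-reward task most transitions carry no learning signal, so oversampling the secondary buffer keeps rewarded and terminal transitions in nearly every batch.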
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420573
Mohammad Hosein Hamian, Ali Beikmohammadi, A. Ahmadi, B. Nasersharif
Semantic image segmentation, which must be performed with high accuracy and speed, is one of the prominent problems in autonomous driving. Semantic segmentation is used to understand an image at the pixel level. In this regard, various deep neural network architectures have been proposed for the semantic segmentation of autonomous driving image datasets. In this paper, we propose a novel combination method in which dividing the image into its constituent regions with the help of classical segmentation yields additional information that improves the results of the DeepLab v3+ network. The proposed method, with the two backbones Xception and MobileNetV2, obtains mIoU scores of 81.73% and 76.31% on the Cityscapes dataset, respectively, showing promising results compared to the model without post-processing.
Title: Semantic Segmentation of Autonomous Driving Images by the Combination of Deep Learning and Classical Segmentation
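One simple way to combine classical regions with network predictions, sketched here as an assumption about the general idea rather than the paper's exact procedure, is majority voting: every pixel in a classically segmented region takes the region's most frequent predicted label.

```python
from collections import Counter

def refine_with_regions(pred, regions):
    """Post-process a per-pixel label map `pred` using a region map `regions`
    from a classical segmentation: each region keeps only its majority label."""
    votes = {}
    for row_p, row_r in zip(pred, regions):
        for label, region in zip(row_p, row_r):
            votes.setdefault(region, Counter())[label] += 1
    majority = {region: c.most_common(1)[0][0] for region, c in votes.items()}
    return [[majority[r] for r in row] for row in regions]
```

This smooths away isolated misclassified pixels inside a homogeneous region, which is one plausible source of the mIoU gain over the raw network output.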
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420623
Mehdi Balouchi, A. Ahmadi
Graph-structured data has recently become very popular and useful. Many areas of science and technology use graphs to model the phenomena they deal with (e.g., computer science, computational economics, biology). Since the volume of data and the velocity of its generation increase every day, machine learning methods have become necessary for analyzing this data. For this purpose, we need a representation of the graph-structured data that preserves the topological information of the graph alongside the feature information of its nodes. Another challenge in applying machine learning methods to graph data is providing a sufficient amount of labeled data, which may be hard to obtain in real-world applications. In this paper, we present a graph neural network-based model for learning node representations that can be used efficiently in machine learning methods. The model learns representations in an unsupervised contrastive framework, so no labels are needed. We test our model by measuring its performance on the task of community detection in graphs. A performance comparison on two citation graphs shows that our model learns representations that achieve higher community detection accuracy than other models in the field.
Title: Graph Representation Learning In A Contrastive Framework For Community Detection
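Two ingredients of such a model can be sketched generically: a message-passing step that mixes a node's features with its neighbors' (preserving topology alongside node features), and a similarity score between two views of a node, as used by contrastive objectives to pull positives together and push other nodes apart. Both are illustrative stand-ins, not the paper's architecture.

```python
import math

def gnn_layer(adj, features):
    """Minimal mean-aggregation message passing: each node's new
    representation is the average of its own and its neighbors' features."""
    n = len(features)
    dim = len(features[0])
    out = []
    for i in range(n):
        neigh = [j for j in range(n) if adj[i][j]] + [i]  # neighbors + self
        out.append([sum(features[j][k] for j in neigh) / len(neigh)
                    for k in range(dim)])
    return out

def contrastive_score(z1, z2):
    """Cosine similarity as the agreement score between two views of a node
    in a contrastive objective."""
    dot = sum(a * b for a, b in zip(z1, z2))
    n1 = math.sqrt(sum(a * a for a in z1))
    n2 = math.sqrt(sum(b * b for b in z2))
    return dot / (n1 * n2)

# Toy path graph 0-1-2 with 2-d node features.
embeddings = gnn_layer([[0, 1, 0], [1, 0, 1], [0, 1, 0]],
                       [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
```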