Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420571
Ghazal Rezaei, M. Hashemi
Title: An SDN-based Firewall for Networks with Varying Security Requirements
With the new coronavirus crisis, the workload of medical devices has increased dramatically, leaving them increasingly vulnerable to security threats and in need of a comprehensive solution. In this work, we take advantage of the flexible and highly manageable nature of Software Defined Networking (SDN) to design a comprehensive security framework that covers a health organization's various security requirements. Our solution is an advanced SDN firewall that resolves the issues facing traditional firewalls. It enables partitioning of the organization's network and the enforcement of different filtering and monitoring behaviors on each partition depending on its security conditions. In designing our model, we pursued efficient and dynamic security management of the network with minimal human intervention, which makes it broadly applicable to networks with different security requirements.
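A minimal sketch of the partition-specific filtering idea the abstract describes, not the authors' implementation: each network partition carries its own rule list, and the firewall applies the matching partition's rules with a default-deny fallback. All names (`PartitionFirewall`, `Rule`, the prefixes and ports) are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): a partition-aware packet
# filter where each partition has its own ordered rule list.
from dataclasses import dataclass

@dataclass
class Rule:
    src_prefix: str   # naive string-prefix match, e.g. "10.0.1."
    dst_port: int
    action: str       # "allow" or "deny"

class PartitionFirewall:
    def __init__(self):
        self.partitions = {}  # partition name -> list of Rule

    def add_rule(self, partition, rule):
        self.partitions.setdefault(partition, []).append(rule)

    def decide(self, partition, src_ip, dst_port):
        # First matching rule wins; unmatched traffic is denied.
        for r in self.partitions.get(partition, []):
            if src_ip.startswith(r.src_prefix) and dst_port == r.dst_port:
                return r.action
        return "deny"

fw = PartitionFirewall()
fw.add_rule("medical-devices", Rule("10.0.1.", 443, "allow"))
fw.add_rule("guest-wifi", Rule("10.0.9.", 80, "deny"))
print(fw.decide("medical-devices", "10.0.1.7", 443))  # allow
print(fw.decide("medical-devices", "10.0.2.7", 443))  # deny (default)
```

Keeping the rule tables keyed by partition is what lets an SDN controller push different filtering behavior to each partition independently.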
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420586
Akram Sabzmakan, S. L. Mirtaheri
Title: An Improved Distributed Access Control Model in Cloud Computing by Blockchain
With ever-expanding digital communications and the need for advanced interoperability and collaboration, organizations and entities need to share their digital assets. Cloud computing is now widely used for managing and storing resources. Access control is a critical issue that faces many challenges in distributed environments, including clouds. In this paper, we present a model of a cloud access control system. Our distributed model uses role-based access control to securely manage resources and the parties' access to them. We provide interoperability between multiple organizations accessing shared resources using Ethereum blockchain smart contracts and access levels for the available resources. Roles define access permissions; however, unlike in the traditional role-based access control model, roles are determined according to the collaborative project of the organizations involved and sometimes may not exist in any single organization: they are created only in the organizations' interactions. Finally, to evaluate the cost and time parameters of the model, we implement it with Ethereum smart contracts and deploy them on the Ethereum test network Rinkeby.
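A hedged sketch of the key idea in plain Python, without the blockchain: roles are attached to a collaborative project rather than to any single organization, and a permission check consults the project's role table. In the paper this logic lives in Ethereum smart contracts; the names here (`Project`, `grant`, `can_access`) are illustrative only.

```python
# Illustrative project-scoped RBAC check (the paper implements this on
# Ethereum; this plaintext version only shows the access-control logic).
class Project:
    def __init__(self, name):
        self.name = name
        self.role_permissions = {}   # role -> set of (resource, access level)
        self.member_roles = {}       # member id -> role

    def define_role(self, role, permissions):
        # Roles exist at the project level, not inside any one organization.
        self.role_permissions[role] = set(permissions)

    def grant(self, member, role):
        self.member_roles[member] = role

    def can_access(self, member, resource, level):
        role = self.member_roles.get(member)
        return (resource, level) in self.role_permissions.get(role, set())

p = Project("joint-research")
p.define_role("auditor", {("dataset-A", "read")})
p.grant("alice@org1", "auditor")
print(p.can_access("alice@org1", "dataset-A", "read"))   # True
print(p.can_access("alice@org1", "dataset-A", "write"))  # False
```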
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420577
Razie Roostaei, Marzieh Sheikhi, Z. Movahedi
Title: Fine-grain Computation Offloading Considering Application Components' Sequencing
Mobile devices are used ever more widely in daily life, but they face constraints such as limited storage, short battery lifetime, and weak computation capacity. To cope with these limitations, mobile devices can offload heavy applications to the cloud using mobile cloud computing. Depending on network conditions, offloading may impose delay and energy costs on mobile devices, so there is a trade-off between local and remote execution. Furthermore, offloading only some components of an application may be more cost-effective than offloading the whole application. In this paper, we propose a fine-grain computation offloading scheme that considers the sequencing of application components. The proposed scheme reduces the exponential complexity of the decision algorithm to polynomial complexity. Simulation and evaluation results demonstrate that offloading efficiency improves thanks to the reduced decision overhead.
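One standard way a component sequence makes the decision polynomial is dynamic programming over the chain: each component runs locally or remotely, with a transfer cost whenever consecutive components run in different places. The sketch below, with illustrative cost values and not the paper's exact model, shows how this avoids enumerating all 2^n placements.

```python
# Hedged sketch of sequence-aware offloading: component i costs local[i]
# locally or remote[i] remotely; moving data between locations between
# consecutive components costs `transfer`. DP keeps this O(n).
def offload_plan(local, remote, transfer):
    n = len(local)
    INF = float("inf")
    # best[i][0]: min cost of components 0..i with i executed locally
    # best[i][1]: min cost of components 0..i with i executed remotely
    best = [[INF, INF] for _ in range(n)]
    best[0] = [local[0], remote[0]]
    for i in range(1, n):
        best[i][0] = local[i] + min(best[i-1][0], best[i-1][1] + transfer)
        best[i][1] = remote[i] + min(best[i-1][1], best[i-1][0] + transfer)
    return min(best[-1])

# Remote execution is cheap, so the plan keeps everything remote after the
# first component and never pays the switching cost:
print(offload_plan([4, 4, 4], [1, 1, 1], 2))  # -> 3
```

With mixed costs (e.g. `offload_plan([1, 10, 1], [10, 1, 10], 3)`) the optimum switches location twice, which an all-local or all-remote heuristic would miss.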
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420543
R. Ghanbari, K. Borna
Title: Multivariate Time-Series Prediction Using LSTM Neural Networks
In this paper, we analyze different LSTM neural network models on a multi-step time-series dataset. The purpose of this study is to present a clear and precise LSTM-based method for sequence datasets. The models can be applied to other similar datasets and adapted to various multi-step datasets with minimal adjustment. The principal question of this study is whether a model can predict the amount of electricity consumed by a household over the next seven days. Using the specified models, we make predictions on the dataset and comprehensively compare the results obtained across the different models. The dataset is household electricity consumption data gathered over four years. Among the existing state-of-the-art models, we achieve the desired prediction results with the least error.
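A minimal sketch of the data preparation such multi-step forecasting requires: turn a univariate consumption series into (history window, next-7-days) training pairs with a sliding window. The window sizes are illustrative assumptions; the paper's exact preprocessing is not specified here.

```python
# Sliding-window supervision for multi-step forecasting: each sample maps
# the previous n_in observations to the following n_out observations.
import numpy as np

def make_windows(series, n_in, n_out=7):
    X, y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i : i + n_in])
        y.append(series[i + n_in : i + n_in + n_out])
    return np.array(X), np.array(y)

daily = np.arange(30, dtype=float)      # stand-in for daily kWh totals
X, y = make_windows(daily, n_in=14, n_out=7)
print(X.shape, y.shape)                 # (10, 14) (10, 7)
```

An LSTM would then be trained on `X` (reshaped to `(samples, timesteps, features)`) to emit the 7-value target vector in one shot.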
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420589
Mahboobeh Riahi-Madvar, B. Nasersharif, A. A. Azirani
Title: Subspace Outlier Detection in High Dimensional Data using Ensemble of PCA-based Subspaces
Outlier detection in high-dimensional data faces the curse of dimensionality, where irrelevant features may prevent the detection of outliers. Principal Component Analysis (PCA) is widely used for dimensionality reduction in high-dimensional outlier detection. Since no single subspace can thoroughly capture all outlier data points, we propose combining the results of multiple subspaces. In this research, we propose a subspace outlier detection algorithm for high-dimensional data using an ensemble of PCA-based subspaces (SODEP). Three relevant subspaces are selected using PCA features to discover different types of outliers, and outlier scores are then computed in the projected subspaces. The experimental results show that our ensemble-based outlier detection is a promising method for high-dimensional data and is more efficient than the compared methods.
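A hedged, numpy-only sketch of the ensemble idea (not the authors' SODEP code): build several PCA subspaces of different dimensionality, score each point by its reconstruction error in each subspace, and average the normalized scores. The choice of subspace dimensions and the planted outlier are illustrative.

```python
# Ensemble of PCA-based subspace scores: a point that is poorly
# reconstructed by several principal subspaces gets a high outlier score.
import numpy as np

def pca_subspace_scores(X, dims=(1, 2, 3)):
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal axes
    scores = np.zeros(len(X))
    for d in dims:
        P = Vt[:d]                     # top-d principal directions
        recon = Xc @ P.T @ P           # project and reconstruct
        err = np.linalg.norm(Xc - recon, axis=1)
        scores += err / err.max()      # normalize, then ensemble-sum
    return scores / len(dims)

rng = np.random.default_rng(0)
X = 0.01 * rng.normal(size=(100, 5))   # small ambient noise
X[:, :2] += rng.normal(size=(100, 2))  # real structure in 2 dimensions
X[0] = [0, 0, 0, 0, 6]                 # planted outlier off the 2-D plane
s = pca_subspace_scores(X, dims=(1, 2))
print(int(np.argmax(s)))               # -> 0: the planted outlier scores highest
```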
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420567
M. Jaberi, H. Mala
Title: Secure Determining of the k-th Greatest Element Among Distributed Private Values
One of the basic operations over distributed data is finding the k-th greatest value in the union of numerical datasets. The challenge arises when the datasets are private and their owners cannot trust any third party. In this paper, we propose a new secure protocol that finds the k-th greatest value by means of a secure summation sub-protocol. We compare the proposed protocol with similar protocols and show that our scheme is more efficient than the well-known protocol of Aggarwal et al. (2004) in terms of computation and communication complexity. Specifically, in the case of T_i = 1 secret value per party P_i, our protocol has log m computation overhead and δ log m communication overhead for party P_i, where m is the maximum acceptable value and δ is the communication overhead of the secure summation sub-protocol. The overheads of our protocol are exactly half those of Aggarwal's protocol.
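The stated log m overhead is consistent with a binary search over the value range: in each round the parties use a secure-summation sub-protocol to learn only how many of their private values are at least the current candidate. The sketch below simulates that idea in plaintext (the `sum` stands in for the secure sub-protocol) and is not a secure implementation of the paper's protocol.

```python
# Plaintext simulation of binary search over [0, m] driven by a (here
# insecure) summation: after log m rounds, lo is the largest value v such
# that at least k private values are >= v, i.e. the k-th greatest value.
def kth_greatest(parties, k, m):
    lo, hi = 0, m                     # values assumed integers in [0, m]
    while lo < hi:
        mid = (lo + hi + 1) // 2
        # stand-in for the secure summation sub-protocol:
        count = sum(sum(1 for v in vals if v >= mid) for vals in parties)
        if count >= k:
            lo = mid                  # at least k values are >= mid
        else:
            hi = mid - 1
    return lo

parties = [[3, 17], [9], [12, 5, 20]]    # private datasets of three parties
print(kth_greatest(parties, k=2, m=32))  # -> 17 (2nd greatest of the union)
```

Each round reveals only one aggregate count, which is why the secure version needs δ communication per round and log m rounds.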
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420545
A. Alizadeh, Navid Malek Alayi, A. Fereidunian, H. Lesani
Title: A Recurrent Neural Network Approach to Model Failure Rate Considering Random and Deteriorating Failures
Recurrent neural networks (RNNs) use their internal state to handle variable-length sequences such as time series; here, the sequences are the uncertain failure rates of systems. Failure rate models of components are required to improve system reliability. Although the failure rate model is of undeniable importance in system reliability assessment, no accepted failure rate model considers all causes of failure, particularly random failures. Planners and decision makers are therefore exposed to high financial risk in their decisions about the system. In this paper, we present an approach that considers the random failure rate along with the deteriorating failure rate to mitigate this risk. The complexity of failure behavior is thus captured by modeling the failure data as a time series. Moreover, the failure rate estimates are tested in a reliability-centered maintenance (RCM) implementation to demonstrate the importance of considering the random failure rate. The results show that a more effective preventive maintenance (PM) schedule can be obtained in the RCM problem when the proposed approach is used for failure rate modeling.
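The "random plus deteriorating" decomposition can be sketched with a standard reliability model: a hazard that is the sum of a constant random-failure term and a Weibull-shaped deterioration term. This is a textbook illustration of the two failure causes, not the paper's RNN model, and all parameter values are illustrative.

```python
# Combined hazard: constant random-failure rate plus an increasing
# Weibull deterioration term (shape > 1 means wear-out behavior).
def failure_rate(t, lam_random=0.01, scale=10.0, shape=3.0):
    deteriorating = (shape / scale) * (t / scale) ** (shape - 1)
    return lam_random + deteriorating

print(failure_rate(0.0))   # -> 0.01: only random failures at t = 0
print(failure_rate(10.0))  # random term plus deterioration at end of life
```

An RNN-based model replaces the closed-form deterioration term with a state learned from the observed failure time series.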
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420550
Seyed Mohammad Seyed Motehayeri, Vahid Baghi, E. M. Miandoab, A. Moeini
Title: Duplicated Replay Buffer for Asynchronous Deep Deterministic Policy Gradient
Off-policy deep reinforcement learning (DRL) algorithms such as Deep Deterministic Policy Gradient (DDPG) have been used to teach intelligent agents to solve complicated problems in continuous state-action environments. Several methods, such as experience replay for selecting a batch of transitions from the replay memory buffer, have been successfully applied to increase training performance and achieve better speed and stability for these algorithms. However, environments with sparse reward functions are a challenge for these algorithms and reduce their performance. This research aims to make transition selection more efficient by increasing the likelihood of selecting important transitions from the replay memory buffer. Our proposed method works better with sparse reward functions, in particular with environments that have termination conditions. We use a secondary replay memory buffer that stores the more critical transitions; during training, transitions are selected from both the primary and the secondary replay buffer. We also use parallel environments to asynchronously execute and fill the primary and secondary replay buffers, which yields better performance and stability. Finally, we evaluate the proposed approach on the Crawler model, one of the Unity ML-Agents tasks with a sparse reward function, against DDPG and AE-DDPG.
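A sketch of the dual-buffer idea (the class and parameter names are ours, not the paper's API): a primary buffer holds all transitions, a secondary buffer keeps only "critical" ones (for instance transitions that earned reward or ended an episode), and each minibatch is drawn from both so rare rewarding experience is replayed more often under sparse rewards.

```python
# Dual replay buffer: minibatches mix ordinary and critical transitions.
import random
from collections import deque

class DualReplayBuffer:
    def __init__(self, capacity=10000):
        self.primary = deque(maxlen=capacity)    # all transitions
        self.secondary = deque(maxlen=capacity)  # critical transitions only

    def add(self, transition, critical=False):
        self.primary.append(transition)
        if critical:                  # e.g. reward != 0 or episode done
            self.secondary.append(transition)

    def sample(self, batch_size, critical_fraction=0.5):
        n_crit = min(int(batch_size * critical_fraction), len(self.secondary))
        batch = random.sample(self.secondary, n_crit)
        batch += random.sample(self.primary, batch_size - n_crit)
        return batch

buf = DualReplayBuffer()
for t in range(100):
    # every 10th transition stands in for a rare rewarding event
    buf.add(("state", "action", 0.0, t), critical=(t % 10 == 0))
print(len(buf.sample(8)))            # -> 8 (half drawn from the critical buffer)
```

In the asynchronous setting, parallel environment workers would call `add` concurrently while the learner calls `sample`.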
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420573
Mohammad Hosein Hamian, Ali Beikmohammadi, A. Ahmadi, B. Nasersharif
Title: Semantic Segmentation of Autonomous Driving Images by the Combination of Deep Learning and Classical Segmentation
Semantic image segmentation, which must be performed with high accuracy and speed, is one of the prominent problems in autonomous driving. Semantic segmentation is used to understand an image at the pixel level. Various architectures based on deep neural networks have been proposed for semantic segmentation of autonomous driving image datasets. In this paper, we propose a novel combination method in which dividing the image into its constituent regions with the help of classical segmentation yields information that improves the results of the DeepLab v3+ network. With the Xception and MobileNetV2 backbones, the proposed method obtains mIoU of 81.73% and 76.31% on the Cityscapes dataset, respectively, which shows promising results compared to the model without post-processing.
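One plausible form of such a combination, sketched under our own assumptions since the paper's exact post-processing is not specified here: given per-pixel class predictions from the deep network and a region map from a classical segmenter, replace each region's labels with that region's majority class, so the classical region boundaries sharpen the deep output.

```python
# Majority-vote refinement of network predictions within classical regions.
import numpy as np

def majority_vote_refine(pred, regions):
    out = pred.copy()
    for r in np.unique(regions):
        mask = regions == r
        labels, counts = np.unique(pred[mask], return_counts=True)
        out[mask] = labels[np.argmax(counts)]   # region takes majority class
    return out

pred = np.array([[0, 0, 1],        # network's per-pixel classes
                 [0, 1, 1],
                 [2, 2, 2]])
regions = np.array([[0, 0, 1],     # classical segmenter's region ids
                    [0, 0, 1],
                    [2, 2, 2]])
print(majority_vote_refine(pred, regions))
# -> [[0 0 1]
#     [0 0 1]
#     [2 2 2]]   (the stray 1 inside region 0 is corrected)
```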
Pub Date: 2021-03-03 | DOI: 10.1109/CSICC52343.2021.9420595
Farnaz Sheikhi, Sharareh Alipour
Title: A Geometric Algorithm for Fault-Tolerant Classification of COVID-19 Infected People
As the world struggles against the COVID-19 pandemic and no certain treatments have yet been discovered, preventing further transmission by isolating infected people has become an effective strategy for overcoming the outbreak, which is why scaling up COVID-19 testing is strongly recommended. However, depending on when tests are performed, they may have a high rate of false-negative results, and this inaccuracy of COVID-19 testing is a challenge for controlling the pandemic. Therefore, in this paper we propose a geometric classification algorithm that is fault-tolerant to the inaccuracy of tests. In a metropolis of n people, let w + r be the number of tested cases, where r is the number of positive and w the number of negative COVID-19 cases, and let k be an upper bound on the number of false-negative cases. The proposed algorithm takes O(r(log r + log w) + w^3 + w log h_R) time to isolate all positive cases together with at most k (according to the error rate of testing) possibly positive (false-negative) cases from the rest of the people. The term h_R in the time complexity is the size of the convex hull of the set of positive cases, and clearly k ∈ O(w). For simplicity of this isolation, we use a simple convex shape (a triangle) for the classification algorithm.
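Since the isolation region is a triangle, the elementary geometric test underlying such a classification is deciding whether a point lies inside it. A minimal sketch (standard computational geometry, not the paper's full algorithm): a point is inside a triangle iff it lies on the same side of all three edges, checked with signed areas (cross products).

```python
# Point-in-triangle test via signed areas; works for either vertex
# orientation because it only checks sign consistency.
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_triangle(p, a, b, c):
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = (d1 < 0) or (d2 < 0) or (d3 < 0)
    has_pos = (d1 > 0) or (d2 > 0) or (d3 > 0)
    return not (has_neg and has_pos)   # boundary counts as inside

tri = ((0, 0), (10, 0), (0, 10))
print(in_triangle((2, 2), *tri))       # True  -> inside the isolation region
print(in_triangle((9, 9), *tri))       # False -> outside the region
```

Classifying all w + r tested cases against one triangle therefore costs linear time; the stated complexity is dominated by choosing the triangle, not by these membership tests.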