In this work, we consider the problem of distributed consensus when some agents in the network are faulty and communication among agents happens over a random sequence of time-varying graphs. Agents iteratively communicate with their neighbors to achieve consensus. We extend the network robustness condition presented in existing work on static (or time-varying but non-random) graphs to the setting in which communication graphs are drawn from a probability distribution and are thus essentially random. We show that if the sequence of random graphs is uniformly stochastically robust, then consensus is achieved almost surely by all non-faulty agents.
{"title":"Byzantine fault-tolerant consensus over random graph processes","authors":"P. Vyavahare","doi":"10.1145/3427477.3429771","DOIUrl":"https://doi.org/10.1145/3427477.3429771","url":null,"abstract":"In this work, we consider the problem of distributed consensus when some agents in the network are faulty and communication among agents happen over a random sequence of time-varying graphs. Agents iteratively communicate with their neighbors to achieve the consensus. We extend the network robustness condition presented in existing works on static (or time-varying but not random) graphs to the situation when communication graphs are derived from some probability distribution thus essentially random. We show that if the sequence of random graphs is uniformly stochastically robust, then the consensus can be achieved almost surely by all non-faulty agents.","PeriodicalId":435827,"journal":{"name":"Adjunct Proceedings of the 2021 International Conference on Distributed Computing and Networking","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114055709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In an IoT (Internet of Things) system, if image data can be collected in addition to sensor data, the amount of information obtained increases significantly, which is very useful for monitoring natural disasters, for example. Considering communication speed and coverage range, Wi-Fi multi-hop communication is considered appropriate for image data transmission. Since Wi-Fi consumes a large amount of power, a power supply for sensor nodes is required. Therefore, we are developing a self-sustaining wind power supply called Nishikaze, which employs a potential energy conversion method. In this paper, we build the second Nishikaze prototype. Measurements of the generated power show that the second prototype can cover the power required to transmit a 500-kbyte image at 10-minute intervals, assuming that a 3 m/s wind blows for 6 hours a day, 100% of the generated power is available, and the active period of the sensor nodes is 5 seconds.
{"title":"Nishikaze: Self-Sustained Wind Power Supply Employing Potential Energy Conversion Method","authors":"F. Teraoka, Shinichi Nishiura, H. Ohno","doi":"10.1145/3427477.3429460","DOIUrl":"https://doi.org/10.1145/3427477.3429460","url":null,"abstract":"In an IoT (Internet of Things) system, if image data can be collected in addition to sensor data, the amount of information obtained will increase significantly, which is very useful for monitoring natural disasters, for example. Considering communication speed and covering range, it is assumed that Wi-Fi multi-hop communication is appropriate for image data transmission. Since Wi-Fi consumes a large amount of power, a power supply for sensor nodes is required. Therefore, we are developing a self-sustaining wind power supply called Nishikaze which employs the potential energy conversion method. In this paper, we produced the Nishikaze second prototype. The measurement results of the amount of the generated power show that the Nishikaze second prototype can cover the power required to transmit an image of 500 kbytes at a 10 minute interval assuming that 3 m/s of wind blows for 6 hours in a day, 100 % of the generated power is available, and the active period of sensor nodes is 5 seconds.","PeriodicalId":435827,"journal":{"name":"Adjunct Proceedings of the 2021 International Conference on Distributed Computing and Networking","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124647762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Backup Placement problem in the distributed setting considers a network graph G = (V, E), in which the goal of each vertex v ∈ V is to select a neighbor such that the maximum number of vertices in V that select the same vertex is minimized [9]. Previous backup placement algorithms are oblivious to key characteristics of heterogeneous wireless networks. Specifically, they do not consider the nodes' memory and storage capacities, nor the case in which nodes have different energy capacities and can therefore leave (or join) the network at any time. These parameters are strongly correlated in wireless networks, as the load on different parts of the network can differ greatly, thus requiring more communication, energy, memory, and storage. To fit the attributes of wireless networks, this work addresses a generalized version of the original problem, namely Backup K-Placement, in which each vertex selects K neighbors, for a positive parameter K. Our Backup K-Placement algorithm terminates within just one round. In addition, we suggest two complementary algorithms that employ Backup K-Placement to obtain efficient virtual memory schemes for wireless networks. The first algorithm divides the memory of each node into many small parts; each vertex is assigned the memories of a large subset of its neighbors. Thus more memory capacity is gained for more vertices, but with much fragmentation. The second algorithm requires greater round complexity, but produces larger virtual memory for each vertex without any fragmentation.
{"title":"Distributed Backup K-Placement and Applications to Virtual Memory in Wireless Networks","authors":"Gal Oren, Leonid Barenboim","doi":"10.1145/3427477.3429466","DOIUrl":"https://doi.org/10.1145/3427477.3429466","url":null,"abstract":"The Backup Placement problem in networks in the distributed setting considers a network graph G = (V, E), in which the goal of each vertex v ∈ V is selecting a neighbor, such that the maximum number of vertices in V that select the same vertex is minimized [9]. Previous backup placement algorithms suffer from obliviousness to main factors of heterogeneous wireless network. Specifically, there is no consideration of the nodes memory and storage capacities, and no reference to a case in which nodes have different energy capacity, and thus can leave (or join) the network at any time. These parameters are strongly correlated in wireless networks, as the load on different parts of the network can differ greatly, thus requiring more communication, energy, memory and storage. In order to fit the attributes of wireless networks, this work addresses a generalized version of the original problem, namely Backup K-Placement, in which each vertex selects K neighbors, for a positive parameter K. Our Backup K-Placement algorithm terminates within just one round. In addition we suggest two complementary algorithms which employ Backup K-Placement to obtain efficient virtual memory schemes for wireless networks. The first algorithm divides the memory of each node to many small parts. Each vertex is assigned the memories of a large subset of its neighbors. Thus more memory capacity for more vertices is gained, but with much fragmentation. The second algorithm requires greater round-complexity, but produces larger virtual memory for each vertex without any fragmentation.","PeriodicalId":435827,"journal":{"name":"Adjunct Proceedings of the 2021 International Conference on Distributed Computing and Networking","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127976196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deborsi Basu, Addanki Sankara Rao, Uttam Ghosh, R. Datta
Fifth-generation (5G) wireless communication networks aim to fulfill all the service demands of end-users in a cost-effective manner. It is becoming extremely challenging to provide optimal end-to-end network services within a restricted resource environment. Telecommunication Service Providers (TSPs) struggle to minimize network deployment cost while covering more users by increasing the network coverage area. Software Defined Networking (SDN) and Network Function Virtualization (NFV) are two key technology enablers that can improve the techno-economic scenario for next-generation telecommunication networks. In this work, we formulate a load- and latency-aware, cost-effective controller placement algorithm (ICDA, Intelligent Controller Deployment Algorithm) that can successfully reduce the CAPEX (Capital Expenditure), OPEX (Operational Expenditure), and TCO (Total Cost of Ownership) of 5G networks using the concept of Virtualized Software Defined Networking (vSDN). Seamless connectivity at Ultra-Low Latency (ULL) is one of the key features of 5G, so we further optimize the model based on network latency and the traffic load demand of the UEs (User Entities). Through comparative graphical analysis, we demonstrate that our proposed algorithm yields significant cost reductions in the 5G network compared to existing networks. The cost-efficient controller deployment algorithm also accounts for other critical network constraints, making the approach very efficient for TSPs.
{"title":"Realization of a Techno-Economic Controller Deployment Architecture for vSDN Enabled 5G Networks","authors":"Deborsi Basu, Addanki Sankara Rao, Uttam Ghosh, R. Datta","doi":"10.1145/3427477.3429991","DOIUrl":"https://doi.org/10.1145/3427477.3429991","url":null,"abstract":"5th Generation of Wireless Communication Networks (5G) are targeting to fulfill all the service demands of end-users in a cost-effective manner. It is becoming extremely challenging to provide optimum end-to-end network services within a restricted resource environment. Telecommunication Service Providers (TSPs) face huge trouble to minimize the network deployment cost to cover more users by increasing the network coverage area. Software Defined Networking (SDN) and Network Function Virtualization (NFV) are two key technology enablers that can improve the techno-economic scenarios for next generation telecommunication networks. In this work, we have formulated a unique load & latency aware cost-effective controller placement algorithm (ICDA – Intelligent Controller Deployment Algorithm) that can successfully reduce the cost of CAPEX (Capital Expenditure), OPEX (Operational Expenditure), and TCO (Total Cost of Ownership) of 5G networks using the concept of Virtualized Software Defined Networking (vSDN). Seamless connectivity in Ultra-Low Latency (ULL) is one of the key features of 5G. That is why we further optimize the model based on network latency and traffic load demand of the UEs (User Entities). Using comparative graphical analysis, it has been demonstrated that our proposed algorithm shows significant cost reduction in the 5G network as compared to existing current days networks. The cost-efficient controller deployment algorithm also takes care of all other critical network constraints and makes this approach very efficient for TSPs.","PeriodicalId":435827,"journal":{"name":"Adjunct Proceedings of the 2021 International Conference on Distributed Computing and Networking","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130498380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xander Mari M. Cruz, J. L. E. Honrado, Nathaniel J. C. Libatique, G. Tangonan, C. Oppus, P. Cabacungan, John Paul A. Mamaradlo, Neil Angelo M. Mercado, Jane Arleth dela Cruz, Julie Ann Dela Cruz, J. G. Cruz
The study aims to provide a software and hardware platform for asynchronous content distribution and content management, with the goal of augmenting and supporting remote-learning workflows for both instructors and students in low-bandwidth situations, which is the prevailing condition in the country. The intent of the project is to give individual school units the ability to prepare, store, and act upon educational materials for students where bandwidth is limited or non-existent, without the need to install and maintain on-site traditional ICT infrastructure or rely on cloud-enabled services that require always-on connectivity. This comes at a time when education needs a risk-resilience and disaster-mitigation plan amidst the COVID-19 pandemic-induced restrictions on human mobility, which have led to the social isolation of educators and students alike.
{"title":"Design and Demonstration of a Resilient Content Distribution and Remote Asynchronous Learning Platform","authors":"Xander Mari M. Cruz, J. L. E. Honrado, Nathaniel J. C. Libatique, G. Tangonan, C. Oppus, P. Cabacungan, John Paul A. Mamaradlo, Neil Angelo M. Mercado, Jane Arleth dela Cruz, Julie Ann Dela Cruz, J. G. Cruz","doi":"10.1145/3427477.3428190","DOIUrl":"https://doi.org/10.1145/3427477.3428190","url":null,"abstract":"The study aims to provide a software and hardware platform for asynchronous content distribution and content management with the goal of augmenting and supporting remote learning workflows for both instructors and students in low-bandwidth situations, which is the prevailing condition in the country. The intent of the project is to enable individual school units the ability to prepare, store, and act upon educational materials for students where bandwidth is limited or non-existent without the need to install and maintain on-site traditional ICT infrastructure or rely on cloud-enabled services that require always-on connectivity. This comes at a time when the need to have a risk resilience and disaster mitigation plan for education amidst the COVID-19 pandemic-induced restrictions to human mobility leading to social isolation of educators and students alike.","PeriodicalId":435827,"journal":{"name":"Adjunct Proceedings of the 2021 International Conference on Distributed Computing and Networking","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127829452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, wireless sensor network (WSN) based monitoring systems have been applied to agricultural vermin control. For example, there are trap systems that capture vermin by automatically closing their gates. These systems need to monitor vermin that approach farmland. Ropeway camera monitoring systems (RCMSs) have been proposed as a vermin-monitoring mechanism. In an RCMS, cameras move along ropes stretched between trees or poles. However, a problem in RCMSs is that obstacles lead to poor visibility, so cameras cannot monitor areas effectively. Therefore, it is crucial to estimate the locations of obstacles such as tree trunks. When estimating obstacle locations using simultaneous localization and mapping (SLAM), it is difficult to extract feature points in dense vegetation due to noise and brightness issues. As a result, feature points are sometimes falsely detected in locations where there are no obstacles. To improve SLAM accuracy, falsely detected feature points must be identified. In this study, we propose a method to estimate obstacle-free areas for an RCMS. The proposed method identifies falsely detected feature points in the estimated obstacle-free areas and thereby reduces errors in SLAM. It determines the largest obstacle-free areas while reducing the number of camera shots. A camera in an RCMS can also capture images of the other cameras while moving along its rope; when it does, we know that there are no obstacles between the two cameras. The proposed method effectively identifies obstacle-free areas by moving two cameras simultaneously. From the results of a simulation with two parallel ropes, we confirmed that the proposed method identifies approximately 92% of the obstacle-free areas found by a brute-force algorithm.
{"title":"Reducing Falsely-detected Feature Points of SLAM by Estimating Obstacle-free Area for RCMSs","authors":"Kei Nihonyanagi, R. Katsuma, K. Yasumoto","doi":"10.1145/3427477.3428187","DOIUrl":"https://doi.org/10.1145/3427477.3428187","url":null,"abstract":"In recent years, wireless sensor network (WSN) based monitoring systems have been applied in agricultural vermin control. For example, there are trap systems that capture vermin by automatically closing their gates. These systems need to monitor vermin that approach farmland. Ropeway camera monitoring systems (RCMSs) have been proposed as vermin monitoring mechanisms. In an RCMS, cameras can move along ropes stretched between trees or poles. However, a problem in RCMSs is that obstacles lead to poor visibility, and cameras cannot monitor areas effectively. Therefore, it is crucial to estimate locations of obstacles such as tree trunks. When estimating locations of obstacles using simultaneous localization and mapping (SLAM), it is difficult to extract feature points in dense vegetation due to noise and brightness issues. As a result, feature points are sometimes falsely detected in locations where there are no obstacles. In order to improve SLAM accuracy, falsely-detected feature points must be identified. In this study, we propose a method to estimate obstacle-free areas for an RCMS. The proposed method can determine falsely-detected feature points in estimated obstacle-free areas, and reduce errors in SLAM. The proposed method determines the largest obstacle-free areas, while reducing the number of camera shots. A camera in an RCMS also shoots other cameras while moving along its rope. When the camera captures the other cameras, we find that there are no obstacles between two cameras. The proposed method effectively identifies obstacle-free areas by moving two cameras simultaneously. From the results of a simulation with two parallel ropes, we confirmed that the proposed method determines approximately 92% of obstacle-free areas, compared with the brute-force algorithm.","PeriodicalId":435827,"journal":{"name":"Adjunct Proceedings of the 2021 International Conference on Distributed Computing and Networking","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115266018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Secure Shell (SSH) allows users to connect to and access a system remotely through a publicly exposed interface. Such systems often become the target of attacks in which an intruder attempts to break in by guessing login credentials. These login attempts are generally recorded in a log file by the server. Our contribution in this paper is twofold. First, we report on a case study using the logs of an SSH server deployed in a production environment. Using a dataset collected over the span of one month, comprising more than one hundred thousand connection records, we study the various types of failed login attempts, the common usernames being attempted, the recurrence of attack sources over time, and the geographical location of attackers. Our case study reveals that attackers attempt various methods to break into the system, that a few common usernames are tried persistently, that attack origins are widely spread, and that more than a handful of sources make repeated attempts to break into the system over a span of weeks. As a second contribution, we propose a method to differentiate failed and successful login attempts using network flow-level statistics and subsequently use them to detect attacks. We experiment with flow records labelled with ground truth and show that the proposed method is able to identify both failed and successful logins.
{"title":"Who is Trying to Compromise Your SSH Server ? An Analysis of Authentication Logs and Detection of Bruteforce Attacks","authors":"Pratibha Khandait, Namrata Tiwari, N. Hubballi","doi":"10.1145/3427477.3429772","DOIUrl":"https://doi.org/10.1145/3427477.3429772","url":null,"abstract":"Secure Socket Shell (SSH) allows users to connect and access the system remotely through a publicly exposed interface. These systems often become the target of attacks where an intruder attempts to break into a system by guessing login credentials. These login attempts are generally recorded into a log file by the server. Our contribution in this paper is twofold. First we report on a case study using logs of an SSH server deployed in a production environment. Using a dataset collected over a span of one month with more than one hundred thousand connection records, we study various types of failed login attempts, common usernames being attempted, recurrence of attack sources over time and geographical location of attackers. Our case study reveals that attackers attempt various methods to break into the system, there are few common usernames which were tried persistently, origin of attacks are well spread and more than a handful number of sources make repeated attempts to break into the system spanning weeks. As a second contribution, we propose a method to differentiate failed and successful login attempts using network flow level statistics and subsequently use them to detect attacks. We experiment with flow records labelled with ground truth and show that proposed method is able to identify logins which are failed as well as successful.","PeriodicalId":435827,"journal":{"name":"Adjunct Proceedings of the 2021 International Conference on Distributed Computing and Networking","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114558009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The dispersion problem on graphs asks k ≤ n robots, initially placed arbitrarily on the nodes of an n-node anonymous graph, to reposition autonomously to reach a configuration with each robot on a distinct node. This problem is of interest due to its relationship to many fundamental robot coordination problems, such as exploration, scattering, load balancing, and relocation of self-driven electric cars (robots) to recharge stations (nodes). The objective is to simultaneously minimize (or provide a trade-off between) two fundamental performance metrics: (i) the time to achieve dispersion and (ii) the memory needed at each robot. The literature has solved this problem on arbitrary graphs assuming fault-free robots. In this paper, we study dispersion on arbitrary graphs with crash-faulty robots: a robot that has crashed vanishes from the system along with the information it carried. We present a deterministic O(min(m, kΔ) · f)-time algorithm achieving dispersion with O(log(max(k, Δ))) bits of memory at each robot, starting from rooted initial configurations in which all k robots are on a single node, where m is the number of edges, f ≤ k is the number of crashes, and Δ is the maximum degree of the graph. When Δ and f are both O(1), the time complexity of our algorithm asymptotically matches the lower bound Ω(k), and when Δ and f are both polylog(n), it is a polylog(n) factor away from the lower bound Ω(k). The memory bound is asymptotically optimal. To the best of our knowledge, this is the first result for dispersion with faults in arbitrary graphs, even when starting from rooted initial configurations.
{"title":"Dispersion of Mobile Robots Tolerating Faults","authors":"D. Pattanayak, Gokarna Sharma, P. Mandal","doi":"10.1145/3427477.3429464","DOIUrl":"https://doi.org/10.1145/3427477.3429464","url":null,"abstract":"The dispersion problem on graphs asks k ≤ n robots initially placed arbitrarily on the nodes of an n-node anonymous graph to reposition autonomously to reach a configuration with each robot on a distinct node. This problem is of interest due to its relationship to many fundamental robot coordination problems, such as exploration, scattering, load balancing, relocation of self-driven electric cars (robots) to recharge stations (nodes), etc. The objective of this problem is to minimize simultaneously (or provide trade-off between) two fundamental performance metrics: (i) time to achieve dispersion and (ii) memory needed at each robot. The literature solved this problem on arbitrary graphs considering fault-free robots. In this paper, we study dispersion on arbitrary graphs considering crash faulty robots – a robot which has crashed vanishes from the system along with the information it carried. We present a deterministic O((min (m, kΔ) · f) time algorithm achieving dispersion with O(log (max (k, Δ))) bits memory at each robot starting from rooted initial configurations such that all k robots are on a single node, where m is the number of edges, f ≤ k is the number of crashes, and Δ is the maximum degree of the graph. When Δ and f are both O(1), time complexity of our algorithm asymptotically matches the lower bound Ω(k) and when Δ and f are both polylog(n), it is polylog(n) factor away from the lower bound Ω(k). The memory bound is asymptotically optimal. To the best of our knowledge, this is the first result for dispersion with faults in arbitrary graphs, even when starting from rooted initial configurations.","PeriodicalId":435827,"journal":{"name":"Adjunct Proceedings of the 2021 International Conference on Distributed Computing and Networking","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122022257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gautam Srivastava, G. T. Reddy, N. Deepa, B. Prabadevi, Praveen Kumar Reddy Maddikunta
In recent years, there has been a rapid increase in applications based on the Internet of Things (IoT) that generate sensitive and personal information. Due to the sensitive nature of the data, there has been a surge in intruders attempting to steal data from these applications. Hence, a strong intrusion detection system that can detect intruders is needed to build a robust defence against them. In this work, a Crow-Search-based ensemble classifier is used to classify the IoT-based UNSW-NB15 dataset. First, the most significant features are selected from the dataset using the Crow-Search algorithm; these features are then fed to an ensemble classifier based on Linear Regression, Random Forest, and XGBoost for training. The performance of the proposed model is then evaluated against state-of-the-art models to check its effectiveness. The experimental results show that the proposed model performs better than the other models considered.
{"title":"An ensemble model for intrusion detection in the Internet of Softwarized Things","authors":"Gautam Srivastava, G. T. Reddy, N. Deepa, B. Prabadevi, Praveen Kumar Reddy Maddikunta","doi":"10.1145/3427477.3429987","DOIUrl":"https://doi.org/10.1145/3427477.3429987","url":null,"abstract":"In recent years, there has been a rapid increase in the applications generating sensitive and personal information based on the Internet of Things (IoT). Due to the sensitive nature of the data there is a huge surge in intruders stealing the data from these applications. Hence a strong intrusion detection systems which can detect the intruders is the need of the hour to build a strong defence systems against the intruders. In this work, a Crow-Search based ensemble classifier is used to classify IoT- based UNSW-NB15 dataset. Firstly, the most significant features are selected from the dataset using Crow-Search algorithm, later these features are fed to the ensemble classifier based on Linear Regression, Random Forest and XGBoost algorithms for training. The performance of the proposed model is then evaluated against the state-of-the-art models to check for its effectiveness. The experimental results prove that the proposed model performs better than the other considered models.","PeriodicalId":435827,"journal":{"name":"Adjunct Proceedings of the 2021 International Conference on Distributed Computing and Networking","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121480913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, video surveillance technology has become pervasive in every sphere. The manual generation of video descriptions requires enormous time and labor, and essential aspects of videos are sometimes overlooked in human summaries. The present work is an attempt at automated description generation for surveillance video. The proposed method consists of extracting key-frames from a surveillance video, detecting objects in the key-frames, generating natural language (English) descriptions of the key-frames, and summarizing the descriptions. The key-frames are identified based on the structural similarity index measure. Object detection in a key-frame is performed using the Single Shot Detection architecture. We use Long Short-Term Memory (LSTM) to generate captions from frames. Translation Error Rate (TER) is used to identify and remove duplicate event descriptions. Term frequency-inverse document frequency (TF-IDF) is used to rank the event descriptions generated from a video, and the top-ranked description is returned as the system-generated summary of the video. We evaluated our proposed approach on the Microsoft Video Description Corpus (MSVD) dataset, and the system produces a Bilingual Evaluation Understudy (BLEU) score of 46.83.
{"title":"VDA: Deep Learning based Visual Data Analysis in Integrated Edge to Cloud Computing Environment","authors":"Atanu Mandal, Amir Sinaeepourfard, S. Naskar","doi":"10.1145/3427477.3429781","DOIUrl":"https://doi.org/10.1145/3427477.3429781","url":null,"abstract":"In recent years, video surveillance technology has become pervasive in every sphere. The manual generation of videos’ descriptions requires enormous time and labor, and sometimes essential aspects of videos are overlooked in human summaries. The present work is an attempt towards the automated description generation of Surveillance Video. The proposed method consists of the extraction of key-frames from a surveillance video, objects detection in the key-frames, natural language (English) description generation of the key-frames, and summarizing the descriptions. The key-frames are identified based on a structural similarity index measure. Object detection in a key-frame is performed using the architecture of Single Shot Detection. We used Long Short Term Memory (LSTM) to generate captions from frames. Translation Error Rate (TER) is used to identify and remove duplicate event descriptions. Term frequency-inverse document frequency (TF-IDF) is used to rank the event descriptions generated from a video, and the top-ranked the description is returned as the system generated a summary of the video. We evaluated the Microsoft Video Description Corpus (MSVD) data set to validate our proposed approach, and the system produces a Bilingual Evaluation Understudy (BLEU) score of 46.83.","PeriodicalId":435827,"journal":{"name":"Adjunct Proceedings of the 2021 International Conference on Distributed Computing and Networking","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133480570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}