Title: STSDB: spatio-temporal sensor database for smart city query processing
Authors: Utsav Vyas, P. Panchal, Mayank Patel, Minal Bhise
DOI: 10.1145/3288599.3296015
Published in: Proceedings of the 20th International Conference on Distributed Computing and Networking (ICDCN 2019), 2019-01-04
Abstract: Modern smart devices are equipped with several sensors that continuously generate data, and managing and analyzing these data efficiently is a key need of the current sensor world. Present applications require real-time analysis of past sensor data for decision making. The goal of this work is to process spatio-temporal queries over sensor data efficiently. The Spatio-Temporal Sensor Index (STSI) helps manage sensor details and leads to faster query processing. Four query types are considered: 1) Spatio-Time Travel, 2) Temporal Aggregation, 3) Time Travel, and 4) Spatio-Temporal Aggregation. The Spatio-Temporal Sensor Database (STSDB) is built by adding the STSI index to HBase. STSDB's performance is compared with HBase on two parameters: Data Insertion Time (DIT) and Query Execution Time (QET). The DIT of STSDB is nearly identical to that of HBase, while the QET, averaged over all four query types, shows a 49% improvement for STSDB over HBase. Both performance parameters show similar trends for scaled data in HBase and STSDB. STSDB is demonstrated using smart city data.
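The paper does not publish the STSI layout. As a hedged illustration only, a common way to index spatio-temporal sensor readings in an HBase-style key-value store is a composite row key of (coarse grid cell id, fixed-width timestamp), so that spatial queries become key-prefix scans and time-travel queries become range scans within a prefix. All names and the grid resolution below are assumptions, not the paper's design.

```python
# Hypothetical sketch of a composite spatio-temporal row key, in the spirit
# of STSI (the paper does not publish the exact layout). A coarse grid cell
# id forms the key prefix (enabling spatial prefix scans), and the timestamp
# is appended zero-padded so lexicographic order equals chronological order.

CELL_DEG = 0.01  # grid resolution in degrees (assumed value)

def cell_id(lat: float, lon: float) -> str:
    """Map a coordinate to a coarse grid cell identifier."""
    row = int((lat + 90.0) / CELL_DEG)
    col = int((lon + 180.0) / CELL_DEG)
    return f"{row:05d}{col:06d}"

def row_key(lat: float, lon: float, ts_ms: int) -> str:
    """Composite key: spatial cell prefix + fixed-width timestamp suffix."""
    return f"{cell_id(lat, lon)}:{ts_ms:013d}"

def time_travel_scan_range(lat: float, lon: float, t0_ms: int, t1_ms: int):
    """Start/stop keys for a time-travel query on one cell (HBase-style scan)."""
    prefix = cell_id(lat, lon)
    return f"{prefix}:{t0_ms:013d}", f"{prefix}:{t1_ms:013d}"
```

Because the timestamp suffix is zero-padded, keys within one cell sort chronologically, which is what makes the time-travel and aggregation scans cheap.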
Title: Cslim: automated extraction of IoT functionalities from legacy C codebases
Authors: Hyogi Sim, A. Paul, E. Tilevich, A. Butt, Muhammad Shahzad
DOI: 10.1145/3288599.3296013
Abstract: Many Internet of Things (IoT) devices are resource-poor, possessing limited memory, disk space, and processor capacity. To accommodate such resource scarcity, IoT software cannot include extraneous functionality that is not used in operating the underlying device. Although legacy systems software contains numerous functionalities that could be reused in IoT applications, these functionalities are exposed as part of a larger codebase with multiple complex dependencies and a heavy runtime footprint. To enable programmers to effectively reuse extant systems software in IoT applications, this paper presents Cslim, a cross-package function extraction tool for C. Cslim extracts programmer-specified functions from a source package and generates new source files for a target package, thereby enabling the reuse of systems software in resource-poor execution environments such as IoT devices. Cslim resolves all dependencies by recursively extracting required functions, while bypassing the complexities of preprocessor macro variability by operating on preprocessed source files. Furthermore, Cslim efficiently traverses and resolves calling dependencies by maintaining an in-memory relational database. Finally, Cslim is easy to use, as it requires neither manual intervention nor source code modifications. Our prototype implementation of Cslim has successfully extracted a set of functions from SQLite and GlusterFS, producing slimmed-down executables that can be deployed on IoT devices.
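Cslim's implementation is not shown in the abstract; the recursive dependency-closure step it describes can be sketched over a call graph. The dict-based graph below is a hypothetical minimal model (Cslim derives the graph from preprocessed C sources and keeps it in an in-memory relational database).

```python
# Minimal sketch of recursive function extraction over a call graph,
# mirroring (not reproducing) Cslim's closure step: starting from the
# programmer-specified functions, pull in every function they transitively
# call, so the extracted set is self-contained.

def extraction_closure(call_graph: dict, roots: set) -> set:
    """Return the full set of functions that must be extracted."""
    needed, stack = set(), list(roots)
    while stack:
        fn = stack.pop()
        if fn in needed:
            continue  # already scheduled for extraction
        needed.add(fn)
        stack.extend(call_graph.get(fn, ()))  # recurse into callees
    return needed
```

For example, with a toy graph where `a` calls `b` and `b` calls `c`, requesting `a` extracts all three functions while leaving unrelated functions behind.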
Title: Compositional structures for streaming applications
Authors: K. Chandy, J. Bunn
DOI: 10.1145/3288599.3288642
Abstract: This paper describes an ongoing project to develop a Python software package, IoTPy, that helps beginning programmers build modular applications that process streams of data collected from sensors, social media, and other sources, and to reason about the correctness of their applications in a compositional fashion. IoTPy helps build streaming applications in four ways: (1) it enables the construction of non-terminating applications that continuously process endless streams of data by encapsulating terminating programs; (2) it supports computation throughout a network of nodes, from sensors at the edges of the network to the cloud and back to actuators at the edge; (3) it allows users to separate the logic of an application from the parallel hardware on which the application runs; and (4) it supports proofs and testing of the correct behavior of a composition from the specifications of its components.
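This is not the IoTPy API; as a hedged sketch, the core encapsulation idea — lifting a terminating function so that it processes an endless stream, and composing such agents — can be illustrated with plain Python generators.

```python
# Hedged sketch (not the IoTPy API): wrap a terminating per-item function f
# so it is applied to every item of a potentially endless stream. Composing
# such wrapped agents yields a non-terminating pipeline built from
# terminating parts, which is the encapsulation idea the abstract describes.

def map_agent(f, stream):
    """Lift a terminating function onto a (possibly endless) stream."""
    for item in stream:
        yield f(item)

def compose(source, *agents):
    """Chain agents into a pipeline: source -> agent_1 -> agent_2 -> ..."""
    stream = source
    for agent in agents:
        stream = agent(stream)
    return stream
```

With an endless source such as `itertools.count()`, the same pipeline runs forever; with a finite source it terminates, which is what lets the components be tested as ordinary terminating programs.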
Title: CAN-based networked path-tracking control of a 4WS4WD electric vehicle: selection of sampling period and hardware-in-the-loop simulation
Authors: A. Singh, R. Potluri
DOI: 10.1145/3288599.3299726
Abstract: A four-wheel-steering four-wheel-drive (4WS4WD) electric vehicle has a steering motor and a driving motor for each wheel, for a total of eight motors. An earlier work of the authors [2] presented a multi-input multi-output (MIMO) path-tracking control system (PTCS) for an autonomous version of this vehicle. The practical implementation of the PTCS planned by the authors has these nine modules communicating and forming feedback loops over a Controller Area Network (CAN)-based serial link, thereby forming a networked control system (NCS). However, the MIMO nonlinear loops in the PTCS make the selection of the sampling period TS for a digital implementation, while factoring in the time delays introduced by the communication, a non-trivial task. This work addresses the difficulty of the MIMO nonlinear loops by finding a SISO representation of the MIMO NCS, using a procedure applicable to a class of MIMO NCSs that exhibit a certain symmetry. It addresses controller conservativeness and controller order by systematically accounting for the time delays caused by communication and by controller code execution, and it validates the choice of TS through a hardware-in-the-loop simulation. The techniques shown in this work are promising for applications involving the coordination of multiple actuators and for CAN-based NCSs. Almost all of the subsequent literature has focused on the challenges of NCSs, and has not tried to construct an NCS in which these challenges may be absent. In sharp contrast, this work focuses on the positives of the NCS architecture and avoids the challenges by using the communication protocol carefully. As a consequence, this work demonstrates a positive of the NCS architecture that the existing literature appears to have overlooked: performance improvement. The paper shows that distributed processing can be used to reduce the sampling interval.
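The abstract describes the delay accounting only qualitatively. As a purely illustrative back-of-envelope check (not the paper's procedure, and with made-up numbers), a candidate sampling period TS must at least cover the worst-case CAN frame latencies of the messages exchanged per sample plus the controller's execution time:

```python
# Illustrative only: a candidate sampling period TS for a networked control
# loop must at least cover the sum of worst-case per-sample CAN frame
# latencies plus controller execution time. The safety margin is an assumed
# engineering factor, not a value from the paper.

def min_sampling_period(frame_latencies_s, controller_exec_s, margin=1.2):
    """Lower bound on TS: total per-sample delay times a safety margin."""
    return margin * (sum(frame_latencies_s) + controller_exec_s)

def ts_is_feasible(ts_s, frame_latencies_s, controller_exec_s):
    """True if the candidate sampling period covers the delay budget."""
    return ts_s >= min_sampling_period(frame_latencies_s, controller_exec_s)
```

For instance, eight actuator messages of 0.5 ms each plus 1 ms of controller execution fit comfortably within a 10 ms sampling period, but not within 5 ms.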
Title: A study on rumor propagation trends and features in a post disaster situation
Authors: T. Mondal, Tunir Roy, Indrajit Bhattacharya, Sourav Bhattacharya, Indranil Das
DOI: 10.1145/3288599.3295581
Abstract: This paper explores three established classes of rumor-propagation features, namely temporal, structural, and linguistic, for tweets collected during a natural disaster, the 2015 Chennai flood. The effects of these features, and the rationale behind them, are explored extensively for the rumors and non-rumors collected from that disaster event.
Title: An accurate missing data prediction method using LSTM based deep learning for health care
Authors: Hemant Verma, Sudhir Kumar
DOI: 10.1145/3288599.3295580
Abstract: In this paper, an accurate missing-data prediction method using Long Short-Term Memory (LSTM) based deep learning for health care is proposed. Physiological signal monitoring, especially with missing data, is a challenging task in health-care monitoring; the reliable and accurate acquisition of many physiological signals can help doctors identify and detect potential health risks. In general, the missing-data problem arises due to patient movement, faulty kits, incorrect observation, or network interference, and it leads to poorly diagnosed results. The ability of the LSTM model to learn long-term dependencies makes it well suited to missing-data prediction. We propose two LSTM models, for 5-step and 10-step prediction, and evaluate them on the MIT-BIH normal-person ECG dataset. The experimental results show that the LSTM method outperforms both Linear Regression and Gaussian Process Regression (GPR).
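The paper's LSTM models are not reproduced here. The data preparation behind k-step-ahead prediction, however, reduces to sliding-window construction, sketched below under the assumption (hypothetical) that each training pair is a fixed history window and the value k steps after it:

```python
# Sketch of the data preparation behind k-step-ahead prediction (the LSTM
# itself is omitted): from a 1-D signal, build (history window, value k
# steps ahead) pairs, as would be needed to train the paper's 5-step and
# 10-step models. The pairing convention is an assumption for illustration.

def make_kstep_pairs(signal, window, k):
    """Return [(signal[i:i+window], signal[i+window+k-1]), ...]."""
    pairs = []
    last = len(signal) - window - k + 1
    for i in range(max(last, 0)):
        # target is the sample k steps after the end of the history window
        pairs.append((signal[i:i + window], signal[i + window + k - 1]))
    return pairs
```

For a 10-sample signal with a 3-sample window and k = 5, this yields three pairs; the first pairs samples [0, 1, 2] with sample index 7, i.e. the value five steps beyond the window.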
Title: Exploration and impact of blockchain-enabled adaptive non-binary trust models
Authors: D. W. Kravitz
DOI: 10.1145/3288599.3288639
Abstract: Distributed identity, attribute, and reputation management constitutes a major benefit that a properly designed permissioned blockchain system can provide. As a complementary built-in feature set, finely granulated, time-window-constrained auditability mechanisms aid blockchain performance and scalability by eliminating the need to front-load core transaction processing with onerous communications and computational complexity, while still meeting the requirements of effective governance, risk and compliance management, and containment of compromised entities. A central aspect of this methodology is the capability to determine not only which entities should be considered trustworthy, but to what extent and for which functionalities: completing a task entails distributing communications across multiple components so as to corroborate claimed suitability and to choose the most trustworthy available solution components matched against specific sub-task requirements.
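The paper's trust model is not specified in the abstract. As a hedged illustration of "non-binary, per-functionality" trust, each entity can carry graded scores per capability, with sub-tasks assigned to the most trustworthy entity that clears a threshold; the data shapes and threshold below are assumptions.

```python
# Hypothetical sketch of adaptive non-binary trust matching: each entity
# carries per-functionality trust scores in [0, 1] (rather than a single
# yes/no bit), and a sub-task is assigned to the most trustworthy entity
# that meets a minimum threshold for the required functionality.

def pick_most_trustworthy(entities, functionality, threshold=0.5):
    """entities: {name: {functionality: score}}. Return best name or None."""
    best, best_score = None, threshold
    for name, scores in entities.items():
        score = scores.get(functionality, 0.0)
        if score >= best_score:
            best, best_score = name, score
    return best
```

An entity that is highly trusted for one functionality (say, signing) may still fail the threshold for another (say, auditing), which is precisely what a single binary trust bit cannot express.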
Title: Odd-even based adaptive two-way routing in mesh NoCs for hotspot mitigation
Authors: R. Raj, C. Gayathri, Saidalavi Kalady, P. Jayaraj
DOI: 10.1145/3288599.3288611
Abstract: Network-on-Chip (NoC) has been adopted as a profitable framework for communication in on-chip multiprocessors, and congestion management using adaptive routing techniques has become a major recent research focus. Hotspots are congested cores in multi-core systems that must handle larger amounts of packetized data than the other cores in the network; when a packet has to pass through hotspots, overall system performance is adversely affected. We identify hotspot cores using counters, and propose an adaptive two-way routing algorithm that restricts some routes, as in the odd-even turn model, to handle the presence of hotspots. The algorithm not only de-routes packets away from current hotspots, but also reduces the possibility of nearby hotspots forming in the future. Experimental results using the SPEC CPU2006 benchmarks show that hotspots are likely in highly congested traffic, and that our algorithm gives about a 14% average reduction in packet latency over existing routing methods in the presence of hotspot cores.
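The abstract says hotspots are identified with counters but gives no rule. A hedged sketch of one plausible counter-based rule follows; the epoch structure and the 2x-of-mean threshold are assumptions, not the paper's parameters.

```python
# Sketch of counter-based hotspot identification in the spirit of the
# abstract: each core counts the packets it handles per epoch, and cores
# whose count exceeds a multiple of the network-wide mean are flagged as
# hotspots so adaptive routing can de-route packets around them. The 2.0
# factor is an assumed threshold, not taken from the paper.

def find_hotspots(packet_counts, factor=2.0):
    """packet_counts: {core_id: packets this epoch}. Return hotspot ids."""
    if not packet_counts:
        return set()
    mean = sum(packet_counts.values()) / len(packet_counts)
    return {core for core, n in packet_counts.items() if n > factor * mean}
```

A core handling 100 packets in an epoch where its peers handle around a dozen would be flagged, while uniformly busy traffic produces no hotspots at all.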
Title: EnTER: an encounter based throwbox deployment strategy for enhancing network reliability in post-disaster scenarios over DTN
Authors: S. Bhattacharjee, S. Bit
DOI: 10.1145/3288599.3295593
Abstract: Delay-tolerant networks (DTNs) have been employed as a viable option for exchanging situational information during post-disaster communication. In contrast to stable networks, network reliability in a DTN depends on the contact opportunities among the nodes (both mobile and static): a higher degree of contact opportunity implies improved message delivery with reduced delivery latency, and hence enhanced network reliability. To improve the degree of contact opportunity among the nodes, the deployment of static relay nodes (throwboxes) has been widely adopted. Deploying throwboxes effectively, however, requires prior knowledge of network parameters such as the scale of the network, the network topology, and the mobility pattern of the nodes, and obtaining such parameters in post-disaster scenarios is challenging. Therefore, in this work we propose EnTER, an encounter-based throwbox deployment strategy. The proposed strategy formulates an empirical relationship between the degree of contact opportunity and the extent of network reliability in a DTN, with reliability measured in terms of Knowledge Sharing Ratio and Average Delivery Latency. This relationship helps in formulating a suitable strategy for deploying throwboxes in a post-disaster scenario. Extensive simulation of a realistic post-disaster scenario reveals that the proposed throwbox deployment strategy outperforms a state-of-the-art throwbox deployment strategy in terms of network reliability.
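EnTER's actual placement algorithm is not given in the abstract. As a hedged sketch of the encounter-based idea only, one can log node encounters against candidate locations and place the available throwboxes where the most contacts occur:

```python
# Hedged sketch of the encounter-based idea behind EnTER (not the paper's
# exact algorithm): record one entry per observed node contact at a
# candidate location, then place the k available throwboxes at the
# locations with the highest encounter counts, since higher contact
# opportunity improves delivery and latency.

from collections import Counter

def place_throwboxes(encounter_log, k):
    """encounter_log: iterable of location ids, one per observed contact."""
    counts = Counter(encounter_log)
    return [loc for loc, _ in counts.most_common(k)]
```

With two throwboxes available and a log dominated by contacts at a relief camp and a hospital, those two locations are selected and the rarely visited road segment is skipped.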
Title: iSecure: imperceptible and secure peer-to-peer communication of post-disaster situational data over opportunistic DTN
Authors: Chandrima Chakrabarti, Siuli Roy
DOI: 10.1145/3288599.3295585
Abstract: Researchers have proposed setting up "infrastructure-less" peer-to-peer opportunistic networks (also known as Delay Tolerant Networks) using the smartphones carried by victims and volunteers in post-disaster scenarios. Volunteers may use such a DTN to relay sensitive situational data. However, in such a fragile network environment, malicious nodes may try to intercept and manipulate data with the intention of corruption and fraud. Furthermore, an adversary may compel a trusted node to give up its security credentials, or may physically capture the node; the attacker then gains the authority to sign any message on behalf of the compromised node and can launch various attacks to perturb the network. To combat these attacks, we envision a compromise-tolerant DTN in which time-varying pseudonyms obscure actual identities and safeguard the privacy of genuine nodes. A unique implicit session-key agreement facilitates the establishment of credential-free secure communication sessions between legitimate nodes and protects data from being revealed to adversaries, while a periodic certificate revocation scheme restricts the use of any compromised credentials beyond a certain time. We evaluate the iSecure scheme using the ONE simulator to assess its feasibility, performance, and overhead.
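The paper's pseudonym construction is not published in the abstract. A standard way to realize time-varying pseudonyms, shown here purely as a hedged illustration, is to derive each epoch's pseudonym from a node secret with an HMAC over the epoch index; the epoch length and truncation are assumptions.

```python
# Hypothetical realization of time-varying pseudonyms (not the paper's
# construction): each node derives its pseudonym for the current time epoch
# as HMAC(secret_seed, epoch_index), so the visible identity changes every
# epoch and past pseudonyms cannot be linked without the seed.

import hashlib
import hmac

EPOCH_SECONDS = 600  # assumed pseudonym lifetime

def pseudonym(seed: bytes, unix_time: int) -> str:
    """Derive the node's pseudonym for the epoch containing unix_time."""
    epoch = unix_time // EPOCH_SECONDS
    digest = hmac.new(seed, str(epoch).encode(), hashlib.sha256).hexdigest()
    return digest[:16]  # truncated for compact DTN headers (assumption)
```

Two messages sent within the same epoch carry the same pseudonym (allowing session continuity), while messages from different epochs, or from nodes with different seeds, are unlinkable to an observer.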