Title: Ordol: Obfuscation-Resilient Detection of Libraries in Android Applications
Authors: Dennis Titze, Michael Lux, J. Schütte
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.292
Abstract: Android apps often include libraries that support certain features or allow rapid app development. Due to Android's system design, libraries are not easily distinguishable from an app's core code. Yet detecting libraries in apps is needed, especially in app analysis, e.g., to determine whether functionality is executed in the app's own code or in library code. Previous approaches detected libraries in ways that are susceptible to code obfuscation; for some of them, even simple obfuscation causes libraries to go unrecognised. Our approach, Ordol, builds on techniques from plagiarism detection to detect a specific library version inside an app in an obfuscation-resilient manner. We show that Ordol copes well with obfuscated code and can easily be applied to real-life apps.
Title: SimiDroid: Identifying and Explaining Similarities in Android Apps
Authors: Li Li, Tegawendé F. Bissyandé, Jacques Klein
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.230
Abstract: App updates and repackaging are recurrent in the Android ecosystem, filling markets with similar apps that must be identified and analysed to accelerate user adoption, improve development efforts, and prevent malware from spreading. Despite the existence of several approaches that improve the scalability of detecting repackaged/cloned apps, researchers and practitioners eventually face the need for a comprehensive pairwise comparison to understand and validate the similarities among apps. This paper describes the design of SimiDroid, a framework for multi-level comparison of Android apps. SimiDroid is built to support the understanding of similarities and changes among app versions and among repackaged apps. In particular, we demonstrate the need for and usefulness of such a framework through case studies implementing different analysis scenarios, revealing various insights into how repackaged apps are built. We further show that the similarity comparison plugins implemented in SimiDroid yield more accurate results than the state of the art.
Title: On the Performance of a Trustworthy Remote Entity in Comparison to Secure Multi-party Computation
Authors: Robin Ankele, A. Simpson
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.361
Abstract: Novel trusted hardware extensions such as Intel's SGX enable user-space applications to be protected against potentially malicious operating systems. Moreover, SGX supports strong attestation guarantees, whereby remote parties can be convinced of the trustworthy nature of the executing user-space application. These developments are particularly interesting in the context of large-scale privacy-preserving data mining. In a typical data mining scenario, mutually distrustful parties have to share potentially sensitive data with an untrusted server, which in turn computes a data mining operation and returns the result to the clients. Generally, such collaborative tasks are referred to as secure multi-party computation (MPC) problems. Privacy-preserving distributed data mining has the additional requirement of (output) privacy preservation, typically achieved by adding random noise to the function output; additionally, it limits the general-purpose functionality to distinct data mining operations. To solve these problems in a scalable and efficient manner, the concept of a Trustworthy Remote Entity (TRE) was recently introduced. We report on the performance of an SGX-based TRE and compare our results to popular secure MPC frameworks. Due to limitations of the MPC frameworks, we benchmarked only simple operations (and argue that more complex data mining operations can be established by composing several basic operations). We consider both a two-party setting (where we iterate over the number of operations) and a multi-party setting (where we iterate over the number of participants).
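For readers unfamiliar with the "simple operations" such MPC frameworks benchmark, a canonical example is a secure sum via additive secret sharing: each client splits its input into random shares that sum to the input, so no single party sees anything but noise. This toy sketch illustrates the idea and is not the paper's benchmark code:

```python
import random

MOD = 2 ** 61 - 1  # work in a finite group so individual shares leak nothing

def share(secret, n):
    """Split `secret` into n additive shares: they sum to secret mod MOD,
    and any n-1 of them are uniformly random."""
    shares = [random.randrange(MOD) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def secure_sum(inputs, n_parties=3):
    """Each client splits its input; party i only ever sees the i-th shares.
    Summing the per-party partial sums reveals only the total."""
    partials = [0] * n_parties
    for x in inputs:
        for i, s in enumerate(share(x, n_parties)):
            partials[i] = (partials[i] + s) % MOD
    return sum(partials) % MOD

assert secure_sum([10, 20, 12]) == 42
```

A TRE collapses this interaction: clients simply upload data to an attested enclave, which is why the comparison between the two architectures is interesting.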
Title: Detecting DNS Tunnel through Binary-Classification Based on Behavior Features
Authors: Jingkun Liu, Shuhao Li, Yongzheng Zhang, Jun Xiao, Peng Chang, Chengwei Peng
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.256
Abstract: DNS tunnels are a typical Internet covert channel used by attackers or bots to evade detection of malicious activities. Stolen information is encoded and encapsulated into DNS packets for transfer. Since DNS traffic is ubiquitous, most firewalls allow it to pass directly, and intrusion detection systems do not raise alarms on it. Popular signature-based and threshold-based detection methods are inflexible and produce many false alarms. Approaches based on character-distribution features also perform poorly, because attackers can change the encoding method to disturb the character distributions. In this paper, we propose an effective and practical DNS tunnel detection mechanism. The prototype system is deployed at a recursive DNS resolver for tunnel identification. We use four kinds of features: time-interval features, request packet-size features, record-type features, and subdomain-entropy features. We evaluate the performance of our proposal with Support Vector Machine, Decision Tree, and Logistic Regression classifiers. The experiments show that the method achieves a high detection accuracy of 99.96%.
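The subdomain-entropy feature can be made concrete: tunnel payloads are encoded data, so the leftmost DNS label has noticeably higher Shannon entropy than human-chosen names. A minimal version (illustrative, not the paper's implementation):

```python
import math
from collections import Counter

def subdomain_entropy(qname):
    """Shannon entropy (bits per character) of the leftmost label of a DNS
    query name. Tunnel payloads are encoded data, so their labels look far
    more random than human-chosen subdomains."""
    label = qname.split(".")[0]
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A benign label vs. a base32-looking exfiltration label:
assert subdomain_entropy("mail.example.com") < subdomain_entropy(
    "nbswy3dpo5xxe3denbswy3dp.example.com")
```

A classifier then consumes this value alongside the timing, size, and record-type features rather than thresholding it directly, which is what makes the scheme robust to encoding changes.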
Title: A Survey on Authorization in Distributed Systems: Information Storage, Data Retrieval and Trust Evaluation
Authors: Ava Ahadipour, Martin Schanzenbach
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.346
Abstract: In distributed environments, entities are spread across different security domains and have no prior knowledge of one another. In this setting, distributed systems and their security components such as entities, certificates, credentials, policies and trust values are dynamic and constantly changing. Thus, access control models and trust approaches must support the dynamic and distributed nature of such systems and their components. The objective of this paper is to present a comprehensive survey of security research in distributed systems. We review the dynamic and distributed nature of the components and evaluation methods of major authorization systems and access control models in the existing literature. Based on this overview, we present a survey of selected trust schemes. We provide a categorization of recommendation-based and reputation-based trust models based on trust evaluation. Additionally, we use credential or certificate storage and chain discovery methods to categorize evidence-based and policy-based trust models. This work can be used as a reference guide for understanding authorization and trust management and for further research on fully decentralized and distributed authorization systems.
Title: WENC: HTTPS Encrypted Traffic Classification Using Weighted Ensemble Learning and Markov Chain
Authors: Wubin Pan, Guang Cheng, Yongning Tang
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.219
Abstract: The SSL/TLS protocol is widely used to secure web applications (i.e., HTTPS). Classifying encrypted SSL/TLS-based applications is an important but challenging task for network management. Traditional traffic classification methods are incapable of accomplishing this task, and several recently proposed approaches that focus on discriminating fingerprints among various SSL/TLS applications have shown various limitations. In this paper, we design a Weighted ENsemble Classifier (WENC) to tackle these limitations. WENC studies the characteristics of various sub-flows during the HTTPS handshake and the subsequent data transmission period. To increase fingerprint recognizability, we establish a second-order Markov chain model whose fingerprint variable jointly considers the packet length and the message type during the HTTPS handshake. Furthermore, the series of packet lengths of the application data is modeled as a hidden Markov model (HMM) with optimal emission probabilities. Finally, a weighted ensemble strategy is devised to combine the advantages of the individual approaches into a unified one. Experimental results show that the classification accuracy of the proposed method reaches 90%, an 11% improvement on average compared to state-of-the-art methods.
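The second-order Markov idea can be sketched as follows: treat each handshake packet as a (message type, length) state, learn transition probabilities conditioned on the two preceding states, and score new flows by log-likelihood against each application's model. This is an illustrative toy, not WENC's implementation:

```python
import math
from collections import defaultdict

def train_second_order(flows):
    """Estimate second-order transition probabilities over per-packet states,
    where a state is a (message_type, bucketed_length) pair, echoing the idea
    of fingerprinting a handshake by length and type jointly."""
    counts = defaultdict(lambda: defaultdict(int))
    for flow in flows:
        for i in range(len(flow) - 2):
            counts[(flow[i], flow[i + 1])][flow[i + 2]] += 1
    model = {}
    for ctx, nxt in counts.items():
        total = sum(nxt.values())
        model[ctx] = {s: c / total for s, c in nxt.items()}
    return model

def log_likelihood(model, flow, floor=1e-6):
    """Score a flow against a trained model; unseen transitions get a floor
    probability instead of zeroing out the whole product."""
    ll = 0.0
    for i in range(len(flow) - 2):
        p = model.get((flow[i], flow[i + 1]), {}).get(flow[i + 2], floor)
        ll += math.log(p)
    return ll

# States are (handshake message type, length bucket) pairs.
training = [[("hello", 2), ("cert", 5), ("keyex", 1), ("data", 3)]] * 10
model = train_second_order(training)
seen = [("hello", 2), ("cert", 5), ("keyex", 1), ("data", 3)]
unseen = [("hello", 2), ("cert", 9), ("keyex", 1), ("data", 3)]
assert log_likelihood(model, seen) > log_likelihood(model, unseen)
```

Classification then amounts to training one such model per application and assigning each flow to the model with the highest score; an ensemble weights this score against those of the other sub-classifiers.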
Title: Privileged Data Within Digital Evidence
Authors: Dominique Fleurbaaij, M. Scanlon, Nhien-An Le-Khac
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.307
Abstract: In recent years, the use of digital communication has increased, and with it the chance of finding privileged data in digital evidence. Privileged data is protected by law from viewing by anyone other than the client, so the digital investigator must handle it properly without being able to view its contents. Procedures for handling this information exist, but they provide little practical guidance, and it is not known how effective filtering is. The objective of this paper is to describe the handling of privileged data in current digital forensic tools and the creation of a script within the digital forensic tool Nuix. The script automates the handling of privileged data to minimize exposure of the contents to the digital investigator. It also uses technology within Nuix that extends the automated search for identical privileged documents by relating files based on their contents. A comparison of the 'traditional' ways of filtering within digital forensic tools and the Nuix script showed that digital forensic tools are still limited when used on privileged data. The script increases effectiveness as a direct result of using relations based on file content.
Title: An Eclat Algorithm Based Energy Detection for Cognitive Radio Networks
Authors: Fan Jin, V. Varadharajan, U. Tupakula
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.358
Abstract: Cognitive radio (CR) can improve spectrum utilization by making use of licensed spectrum in an opportunistic manner. The sensing reports from all CR nodes are sent to a Fusion Centre (FC), which aggregates these reports and decides whether the primary user (PU) is present, based on some decision rules. Such a collaborative sensing mechanism forms the foundation of any centralised cognitive radio network (CRN). However, it also gives malicious users (MUs) hiding among the legitimate users more opportunities to launch spectrum sensing data falsification (SSDF) attacks, in which some malicious users intentionally report incorrect local sensing results to the FC and disrupt the global decision-making process. To mitigate SSDF attacks, this paper proposes an Eclat-algorithm-based detection strategy for finding the colluding malicious nodes. Simulation results show that the sensing performance of the scheme is better than that of traditional majority-based voting decisions in the presence of SSDF attacks.
Title: Grouping-Proofs Based Access Control Using KP-ABE for IoT Applications
Authors: Lyes Touati
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.251
Abstract: The Internet of Things (IoT) is a new paradigm in which everyday objects are interconnected with each other and with the Internet. This paradigm is receiving much attention from the scientific community and is applied in many fields. In some applications, it is useful to prove that a number of objects are simultaneously present in a group. For example, an individual might want to authorize NFC payment with his mobile phone only if k of his devices are present, to ensure that he is the right person. This principle is known as grouping-proofs. However, existing grouping-proof schemes are mostly designed for RFID systems and do not fit the characteristics of the IoT. In this paper, we propose a threshold grouping-proof scheme for IoT applications. Our scheme uses Key-Policy Attribute-Based Encryption (KP-ABE) to encrypt a message so that it can be decrypted only if at least k objects are simultaneously present in the same location. A security analysis and performance evaluation are conducted to show the effectiveness of the proposed solution.
Title: Detection of Single Event Transients Based on Compressed Sensing
Authors: C. Shao, Huiyun Li
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.223
Abstract: Single event transients (SETs) seriously degrade the reliability of integrated circuits (ICs), especially those in mission- or security-critical applications. Detecting and locating SETs is useful for fault analysis and future hardening. Traditional SET detection methods usually require special sensors embedded in the circuits, or fine-resolution radiation scanning over the surface for inspection. In this paper, we establish the relationship between the sparsity of SETs and the overall faults, and then develop a compressed sensing method to locate SETs in ICs without any embedded sensors or imaging processing. A case study on a cryptographic IC by logic simulation demonstrates that the proposed method has two main advantages: 1) the SET-sensitive area can be accurately identified, and 2) the sampling rate is reduced by 70%, so test efficiency is greatly improved with negligible hardware overhead.