Anonymous CoinJoin Transactions with Arbitrary Values
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.280
F. Maurer, Till Neudecker, Martin Florian
Bitcoin, arguably the most popular cryptocurrency to date, allows users to perform transactions using freely chosen pseudonymous addresses. Previous research, however, suggests that these pseudonyms can easily be linked, implying a lower level of privacy than originally expected. To obfuscate the links between pseudonyms, different mixing methods have been proposed. One of the first approaches is the CoinJoin concept, where multiple users merge their transactions into one larger transaction. In theory, CoinJoin can be used to mix and transact bitcoins simultaneously, in one step. Yet, it is expected that differing bitcoin amounts would allow an attacker to derive the original single transactions. Solutions based on CoinJoin therefore prescribe the use of fixed bitcoin amounts and cannot be used to perform arbitrary transactions. In this paper, we define a model for CoinJoin transactions and metrics that allow conclusions about the provided anonymity. We generate and analyze CoinJoin transactions and show that with differing, representative amounts they generally do not provide any significant anonymity gains. As a solution to this problem, we present an output splitting approach that introduces sufficient ambiguity to effectively prevent linking in CoinJoin transactions. Furthermore, we discuss how this approach could be used in Bitcoin today.
{"title":"Anonymous CoinJoin Transactions with Arbitrary Values","authors":"F. Maurer, Till Neudecker, Martin Florian","doi":"10.1109/Trustcom/BigDataSE/ICESS.2017.280","DOIUrl":"https://doi.org/10.1109/Trustcom/BigDataSE/ICESS.2017.280","url":null,"abstract":"Bitcoin, the arguably most popular cryptocurrency to date, allows users to perform transactions using freely chosen pseudonymous addresses. Previous research, however, suggests that these pseudonyms can easily be linked, implying a lower level of privacy than originally expected. To obfuscate the links between pseudonyms, different mixing methods have been proposed. One of the first approaches is the CoinJoin concept, where multiple users merge their transactions into one larger transaction. In theory, CoinJoin can be used to mix and transact bitcoins simultaneously, in one step. Yet, it is expected that differing bitcoin amounts would allow an attacker to derive the original single transactions. Solutions based on CoinJoin therefore prescribe the use of fixed bitcoin amounts and cannot be used to perform arbitrary transactions.In this paper, we define a model for CoinJoin transactions and metrics that allow conclusions about the provided anonymity. We generate and analyze CoinJoin transactions and show that with differing, representative amounts they generally do not provide any significant anonymity gains. As a solution to this problem, we present an output splitting approach that introduces sufficient ambiguity to effectively prevent linking in CoinJoin transactions. Furthermore, we discuss how this approach could be used in Bitcoin today.","PeriodicalId":170253,"journal":{"name":"2017 IEEE Trustcom/BigDataSE/ICESS","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130527979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Precision-Enhanced Image Attribute Prediction Model
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.324
Chen Hu, J. Miao, Zhuo Su, X. Shi, Qiang Chen, Xiaonan Luo
High-precision attribute prediction is a challenging issue due to complex object and scene variations. Aiming to enhance attribute prediction precision, we propose an Enhanced Attribute Prediction-Latent Dirichlet Allocation (EAP-LDA) model to address this issue. The EAP-LDA model enhances attribute prediction precision in two steps: classification adaptation and prediction enhancement. In classification adaptation, we map image low-level features to mid-level features (attributes) with SVM classifiers trained on the low-level features extracted from the images. In prediction enhancement, we first exploit the advantages of the LDA topic model in extracting and analyzing the topic information between image samples and attributes. We then use a KNN-based strategy to search for the nearest-neighbor image collection from the test datasets. Finally, we evaluate the accuracy on the HAT dataset and demonstrate a significant improvement over the baseline algorithm.
{"title":"Precision-Enhanced Image Attribute Prediction Model","authors":"Chen Hu, J. Miao, Zhuo Su, X. Shi, Qiang Chen, Xiaonan Luo","doi":"10.1109/Trustcom/BigDataSE/ICESS.2017.324","DOIUrl":"https://doi.org/10.1109/Trustcom/BigDataSE/ICESS.2017.324","url":null,"abstract":"High-precision attribute prediction is a challenging issue due to the complex object and scene variations. Targeting on enhancing attribute prediction precision, we propose an Enhanced Attribute Prediction-Latent Dirichlet Allocation (EAP-LDA) model to address this issue. EAP-LDA model enhances the attribute prediction precision in two steps: classification adaptation and prediction enhancement. In classification adaptation, we transfer image low-level features to mid-level features (attributes) by the SVM classifiers, which are trained using the low-level features extracted from images. In prediction enhancement, we first exploit its advantages in extracting and analyzing the topic information between image samples and attributes by the LDA topic model. We then use a strategy to search the nearest neighbor image collection from test datasets by KNN. Finally, we evaluate the accuracy onHAT datasets and demonstrate significant improvement over the baseline algorithm.","PeriodicalId":170253,"journal":{"name":"2017 IEEE Trustcom/BigDataSE/ICESS","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132032390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discovering Malicious Domains through Alias-Canonical Graph
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.241
Chengwei Peng, Xiao-chun Yun, Yongzheng Zhang, Shuhao Li, Jun Xiao
Malicious domains play a vital role in various cyber crimes. Most prior works depend on DNS A (address) records, which resolve directly to IP addresses, to detect malicious domains. In this paper, we propose a malicious domain detection method focusing on domains that are not resolved to IP addresses directly but only appear in DNS CNAME (canonical name) records. Such domains account for 18.39% of the total domains in our 1530-day DNS traffic dataset collected from 217 DNS servers. In addition, the real-world dataset shows that domains connected with malicious ones through DNS CNAME records tend to be malicious too. Based on this observation, our proposal identifies illegal domains by computing their maliciousness probabilities. The experiments demonstrate the high detection performance of our solution: on average, it achieves a true positive rate of over 97.25% with a false positive rate below 0.027%. Moreover, the proposal performs near-real-time detection. Our work can help network attack defenders build a more robust domain monitoring system.
{"title":"Discovering Malicious Domains through Alias-Canonical Graph","authors":"Chengwei Peng, Xiao-chun Yun, Yongzheng Zhang, Shuhao Li, Jun Xiao","doi":"10.1109/Trustcom/BigDataSE/ICESS.2017.241","DOIUrl":"https://doi.org/10.1109/Trustcom/BigDataSE/ICESS.2017.241","url":null,"abstract":"Malicious domains play a vital component in various cyber crimes. Most of the prior works depend on DNS A (address) records to detect the malicious domains, which are directly resolved to IP addresses. In this paper, we propose a malicious domain detection method focusing on the domains that are not resolved to IP addresses directly but only appear in DNS CNAME (canonical name) records. This kind of domains occupy 18.39% of the total domains in our 1530-days-long DNS traffic dataset collected from 217 DNS servers. In addition, the real-world dataset shows that domains connected with malicious ones through DNS CNAME records tend to be malicious too. Based on this observation, our proposal can identify the illegal domains by computing their maliciousness probabilities. The experiments demonstrate the high detection performance of our solution. It achieves the accuracy, on average, over 97.25% true positive rate with less than 0.027% false positive rate. Moreover, the proposal performs near real time detections. Our work can help network attack defenders to build a more robust domain monitoring system.","PeriodicalId":170253,"journal":{"name":"2017 IEEE Trustcom/BigDataSE/ICESS","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125534257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Methodology for Privacy-Aware IoT-Forensics
Pub Date: 2017-08-01 | DOI: 10.1109/TRUSTCOM/BIGDATASE/ICESS.2017.293
Ana Nieto, Ruben Rios, Javier López
The Internet of Things (IoT) brings new challenges to digital forensics. Given the number and heterogeneity of devices in such scenarios, it becomes extremely difficult to carry out investigations without the cooperation of individuals. Even if they are not directly involved in the offense, their devices can yield digital evidence that might provide useful clarification in an investigation. However, when providing such evidence they may leak sensitive personal information. This paper proposes PRoFIT, a new model for IoT-forensics that takes privacy into consideration by incorporating the requirements of ISO/IEC 29100:2011 throughout the investigation life cycle. PRoFIT is intended to lay the groundwork for the voluntary cooperation of individuals in cyber crime investigations.
{"title":"A Methodology for Privacy-Aware IoT-Forensics","authors":"Ana Nieto, Ruben Rios, Javier López","doi":"10.1109/TRUSTCOM/BIGDATASE/ICESS.2017.293","DOIUrl":"https://doi.org/10.1109/TRUSTCOM/BIGDATASE/ICESS.2017.293","url":null,"abstract":"The Internet of Things (IoT) brings new challenges to digital forensics. Given the number and heterogeneity of devices in such scenarios, it bring extremely difficult to carry out investigations without the cooperation of individuals. Even if they are not directly involved in the offense, their devices can yield digital evidence that might provide useful clarification in an investigation. However, when providing such evidence they may leak sensitive personal information. This paper proposes PRoFIT; a new model for IoT-forensics that takes privacy into consideration by incorporating the requirements of ISO/IEC 29100:2011 throughout the investigation life cycle. PRoFIT is intended to lay the groundwork for the voluntary cooperation of individuals in cyber crime investigations.","PeriodicalId":170253,"journal":{"name":"2017 IEEE Trustcom/BigDataSE/ICESS","volume":"2012 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128005220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting Permission Over-claim of Android Applications with Static and Semantic Analysis Approach
Junwei Tang, Ruixuan Li, Hongmu Han, Heng Zhang, X. Gu
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.303
Android's access control granularity, based on its permission mechanism, is relatively coarse and cannot effectively protect user privacy. Many Android applications do not strictly abide by the principle of least privilege (PLP): both benign and malicious apps may request more permissions than they really use. We rethink the permission over-claim problem of Android applications and extend it to three kinds of problems: Explicit Permission Over-claim, Implicit Permission Over-claim and Ad Library Permission Over-claim. The latter two problems are new and have not been raised by previous work. Our static analysis decompiles the applications to generate intermediate code and then analyzes the usage of permissions. The static analysis of 10710 applications shows that 76.08% of them may have the Explicit Permission Over-claim problem; among those, 424 applications hold sensitive permissions that are used only in the advertisement library's code rather than in the developer's own code, and thus have the Ad Library Permission Over-claim problem. The main idea of our semantic analysis is to calculate the semantic similarity between an app's description and function phrases; if the similarity exceeds a certain threshold, the app is considered relevant to the corresponding function. We compare the results of the semantic analysis with those of manually reading 102 Android application descriptions. The F-measures of the three chosen functions are 80.82%, 70.48% and 89.62%, respectively. The evaluation results show that our method can efficiently detect the above three kinds of permission over-claim problems, which indicates that it would help normal users gain a clear understanding of the permission usage of Android applications.
{"title":"Detecting Permission Over-claim of Android Applications with Static and Semantic Analysis Approach","authors":"Junwei Tang, Ruixuan Li, Hongmu Han, Heng Zhang, X. Gu","doi":"10.1109/Trustcom/BigDataSE/ICESS.2017.303","DOIUrl":"https://doi.org/10.1109/Trustcom/BigDataSE/ICESS.2017.303","url":null,"abstract":"Android access control granularity based on its permission mechanism is relatively coarse, which cannot effectively protect the user privacy. Many Android applications do not strictly abide by the principle of least privilege (PLP). Both benign and malicious apps may request more permissions than those they really use. We rethink previous permission over-claim problem of Android applications, and extend it to three kinds of problems: Explicit Permission Over-claim, Implicit Permission Over-claim and Ad Library Permission Over-claim. The latter two problems are new that have not been raised by any previous work. Static analysis is to decompile the applications to generate intermediate code and then analyze the usage of permissions. Our static analysis on 10710 applications shows that 76.08% of them may have Explicit Permission Over-claim problem, among those there are 424 applications that have sensitive permissions, which are only used in the advertisement library’s code of the applications rather than developer’s own code. They have Ad Library Permission Over-claim problem. The main idea of our semantic analysis is to calculate the semantic similarity between apps’ descriptions and function phrases. If the similarity exceeds a certain threshold, the app is considered relevant to the corresponding function. We compare the results of the semantic analysis with those of manual reading of 102 Android application descriptions. The F-measures of the three chosen functions are 80.82%, 70.48% and 89.62%, respectively. The evaluation results show our method can efficiently detect the above three kinds of permission over claim problems which indicates that our method would be helpful for normal users to have a clear understanding of permission usage of Android applications.","PeriodicalId":170253,"journal":{"name":"2017 IEEE Trustcom/BigDataSE/ICESS","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133591372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HiNextApp: A Context-Aware and Adaptive Framework for App Prediction in Mobile Systems
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.312
Chaoneng Xiang, Duo Liu, Shiming Li, Xiao Zhu, Yang Li, Jinting Ren, Liang Liang
The variety of applications (Apps) installed on mobile systems such as smartphones enriches our lives, but makes system management more difficult. For example, finding a specific App becomes more inconvenient as more Apps are installed on a smartphone, and App response times can grow because of the gap between more and larger Apps and limited memory capacity. Recent work has proposed several methods of predicting the next used Apps (hereinafter app-prediction) to solve these issues, but suffers from low prediction accuracy and high training costs. In particular, applying app-prediction to memory management (such as LMK) and App prelaunching places high requirements on prediction accuracy and training costs. In this paper, we propose an app-prediction framework, named HiNextApp, to improve app-prediction accuracy and reduce training costs in mobile systems. HiNextApp is based on contextual information and can adjust the size of prediction periods adaptively. The framework mainly consists of two parts: a non-uniform Bayes model and an elastic algorithm. The experimental results show that HiNextApp can effectively improve prediction accuracy and reduce training times. Besides, compared with a traditional Bayes model, the overhead of our framework is relatively low.
{"title":"HiNextApp: A Context-Aware and Adaptive Framework for App Prediction in Mobile Systems","authors":"Chaoneng Xiang, Duo Liu, Shiming Li, Xiao Zhu, Yang Li, Jinting Ren, Liang Liang","doi":"10.1109/Trustcom/BigDataSE/ICESS.2017.312","DOIUrl":"https://doi.org/10.1109/Trustcom/BigDataSE/ICESS.2017.312","url":null,"abstract":"A variety of applications (App) installed on mobile systems such as smartphones enrich our lives, but make it more difficult to the system management. For example, finding the specific Apps becomes more inconvenient due to more Apps installed on smartphones, and App response time could become longer because of the gap between more, larger Apps and limited memory capacity. Recent work has proposed several methods of predicting next used Apps (here in after appprediction) to solve the issues, but faces the problems of the low prediction accuracy and high training costs. Especially, applying app-prediction to memory management (such as LMK) and App prelaunching has high requirements for the prediction accuracy and training costs. In this paper, we propose an app-prediction framework, named HiNextApp, to improve the app-prediction accuracy and reduce training costs in mobile systems. HiNextApp is based on contextual information, and can adjust the size of prediction periods adaptively. The framework mainly consists of two parts: non-uniform bayes model and an elastic algorithm. The experimental results show that HiNextApp can effectively improve the prediction accuracy and reduce training times. Besides, compared with traditional bayes model, the overhead of our framework is relatively low.","PeriodicalId":170253,"journal":{"name":"2017 IEEE Trustcom/BigDataSE/ICESS","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133922501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Intrusion Detection System Based on Polynomial Feature Correlation Analysis
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.340
Qingru Li, Zhiyuan Tan, Aruna Jamdagni, P. Nanda, Xiangjian He, Wei Han
This paper proposes an anomaly-based Intrusion Detection System (IDS), which flags anomalous network traffic with a distance-based classifier. A polynomial approach was designed and applied in this work to extract hidden correlations from traffic-related statistics in order to provide distinguishing features for detection. The proposed IDS was evaluated using the well-known KDD Cup 99 data set. Evaluation results show that the proposed system achieved better detection rates on the KDD Cup 99 data set than two other state-of-the-art detection schemes. Moreover, the computational complexity of the system is analysed in this paper and shown to be similar to that of the two state-of-the-art schemes.
{"title":"An Intrusion Detection System Based on Polynomial Feature Correlation Analysis","authors":"Qingru Li, Zhiyuan Tan, Aruna Jamdagni, P. Nanda, Xiangjian He, Wei Han","doi":"10.1109/Trustcom/BigDataSE/ICESS.2017.340","DOIUrl":"https://doi.org/10.1109/Trustcom/BigDataSE/ICESS.2017.340","url":null,"abstract":"This paper proposes an anomaly-based Intrusion Detection System (IDS), which flags anomalous network traffic with a distance-based classifier. A polynomial approach was designed and applied in this work to extract hidden correlations from traffic related statistics in order to provide distinguishing features for detection. The proposed IDS was evaluated using the well-known KDD Cup 99 data set. Evaluation results show that the proposed system achieved better detection rates on KDD Cup 99 data set in comparison with another two state-of-the-art detection schemes. Moreover, the computational complexity of the system has been analysed in this paper and shows similar to the two state-of-the-art schemes.","PeriodicalId":170253,"journal":{"name":"2017 IEEE Trustcom/BigDataSE/ICESS","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131882359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Large-Scale Multi-label Ensemble Learning on Spark
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.328
Jorge Gonzalez-Lopez, Alberto Cano, S. Ventura
Multi-label learning is a challenging problem which has received growing attention in the research community over recent years. Hence, there is a growing demand for effective and scalable multi-label learning methods for larger datasets, both in terms of the number of instances and the number of output labels. The use of ensemble classifiers is a popular approach for improving multi-label model accuracy, especially for datasets with high-dimensional label spaces. However, the increasing computational complexity of the algorithms in such ever-growing high-dimensional label spaces requires new approaches to manage data effectively and efficiently in distributed computing environments. Spark is a framework based on MapReduce, a distributed programming model that offers a robust paradigm to handle large-scale datasets in a cluster of nodes. This paper focuses on multi-label ensembles and proposes a number of implementations through the use of parallel and distributed computing with Spark. Five different implementations are proposed and their impact on the performance of the ensemble is analyzed. The experimental study shows the benefits of the distributed implementations over traditional single-node, single-thread execution, in terms of performance over multiple metrics as well as significant speedups on 29 benchmark datasets.
{"title":"Large-Scale Multi-label Ensemble Learning on Spark","authors":"Jorge Gonzalez-Lopez, Alberto Cano, S. Ventura","doi":"10.1109/Trustcom/BigDataSE/ICESS.2017.328","DOIUrl":"https://doi.org/10.1109/Trustcom/BigDataSE/ICESS.2017.328","url":null,"abstract":"Multi-label learning is a challenging problem which has received growing attention in the research community over the last years. Hence, there is a growing demand of effective and scalable multi-label learning methods for larger datasets both in terms of number of instances and numbers of output labels. The use of ensemble classifiers is a popular approach for improving multi-label model accuracy, especially for datasets with high-dimensional label spaces. However, the increasing computational complexity of the algorithms in such ever-growing high-dimensional label spaces, requires new approaches to manage data effectively and efficiently in distributed computing environments. Spark is a framework based on MapReduce, a distributed programming model that offers a robust paradigm to handle large-scale datasets in a cluster of nodes. This paper focuses on multi-label ensembles and proposes a number of implementations through the use of parallel and distributed computing using Spark. Additionally, five different implementations are proposed and the impact on the performance of the ensemble is analyzed. The experimental study shows the benefits of using distributed implementations over the traditional single-node single-thread execution, in terms of performance over multiple metrics as well as significant speedup tested on 29 benchmark datasets.","PeriodicalId":170253,"journal":{"name":"2017 IEEE Trustcom/BigDataSE/ICESS","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132147222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NetworkTrace: Probabilistic Relevant Pattern Recognition Approach to Attribution Trace Analysis
Jian Xu, Xiao-chun Yun, Yongzheng Zhang, Yafei Sang, Zhenyu Cheng
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.301
Network attack prevention is a critical research area of information security. Network attacks would be deterred if attribution techniques were capable of tracing back to the attacker after a hacking event. Therefore, attributing these attacks to a person or organization becomes one of the important tasks when analysts attempt to profile the attacker behind attack traces. To facilitate this process, we study the connections among attribution traces and propose methods based on probabilistic relevance. First, we present a two-layer NetworkTrace framework; then, based on relevance patterns, we derive the existence probability of the subjects of concern. Finally, we quantify the connection relevance between subjects through a Ref algorithm. By analyzing the attribution traces extracted from the APT1 report, we illustrate the effectiveness of the existence probability algorithm. We then demonstrate Ref's effectiveness in quantifying the relevance between an organization and its affinitive partners by analyzing the relevance values and drawing a relevance matrix between APT1 inodes. The results show that the proposed NetworkTrace facilitates the evaluation of the plausibility of relevance between different traceable subjects.
{"title":"NetworkTrace: Probabilistic Relevant Pattern Recognition Approach to Attribution Trace Analysis","authors":"Jian Xu, Xiao-chun Yun, Yongzheng Zhang, Yafei Sang, Zhenyu Cheng","doi":"10.1109/Trustcom/BigDataSE/ICESS.2017.301","DOIUrl":"https://doi.org/10.1109/Trustcom/BigDataSE/ICESS.2017.301","url":null,"abstract":"Network attack prevention is a critical research area of information security. Network attacks would become choked if attribution techniques are capable of tracing back to the attacker after the hacking event. Therefore, attributing these attacks to a person or organization turns into one of the important tasks when analysts attempt to profile the attacker behind attack traces. To facilitate this process, we research on the connections among attribution traces and propose methods based on probabilistic relevance. First, we present a two-layer NetworkTrace frame, then based on relevance patterns, we propose the existence probability of concerned subjects. At last, we quantify the connection relevance between subjects through a Ref algorithm. By means of analyzing the attribution traces extracted from APT1 report, we illustrate the effectiveness of the existence probability algorithm. Then, we demonstrate Ref's effectiveness in quantifying the relevancies between organization and its affinitive partners by analyzing the relevancies and draw relevance matrix between APT1 inodes. The results show the proposed NetworkTrace facilitates the evaluation of the plausibility relevance between different traceable subjects.","PeriodicalId":170253,"journal":{"name":"2017 IEEE Trustcom/BigDataSE/ICESS","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131840755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ProtectCall: Call Protection Based on User Reputation
Pub Date: 2017-08-01 | DOI: 10.1109/Trustcom/BigDataSE/ICESS.2017.297
Ibrahim Tariq Javed, Khalifa Toumi, N. Crespi
Web calling services are exposed to numerous social security threats in which the context of communication is manipulated. An attacker establishes a communication session to send numerous simultaneous pre-recorded advertisement calls (robocalls), distribute malicious files or viruses, and use false identities to conduct phishing. User identification alone is not sufficient to provide a high level of trust between communicating participants. Therefore, we propose ’ProtectCall’, a trust model that allows web calling services to estimate the trustworthiness and reputation of their users based on the evaluation of three parameters: authenticity, credibility and popularity. The main objective of ProtectCall is to protect web communication services from social security threats. ProtectCall allows users to make decisions based on the trustworthiness of their communicating participants.
{"title":"ProtectCall: Call Protection Based on User Reputation","authors":"Ibrahim Tariq Javed, Khalifa Toumi, N. Crespi","doi":"10.1109/Trustcom/BigDataSE/ICESS.2017.297","DOIUrl":"https://doi.org/10.1109/Trustcom/BigDataSE/ICESS.2017.297","url":null,"abstract":"Web calling services are exposed to numerous social security threats in which context of communication is manipulated. A attacker establishes a communication session to send numerous simultaneous pre-recorded advertisement calls (Robocalls), distribute malicious files or viruses and uses false identity to conduct phishing. User identification alone is not sufficient to provide a high level of trust between communicating participants. Therefore, we propose ’ProtectCall’ a trust model that allows web calling services to estimate the trustworthiness and reputation of their users based on the evaluation of three parameters: authenticity, credibility and popularity. The main objective of ProtectCall is to protect web communication services from social security threats. ProtectCall allows users to make decisions based on the trustworthiness of their communicating participants.","PeriodicalId":170253,"journal":{"name":"2017 IEEE Trustcom/BigDataSE/ICESS","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116358055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}