
Proceedings of the 2020 10th International Conference on Communication and Network Security: Latest Publications

Vulnerability Analysis of the Exposed Public IPs in a Higher Education Institution
Agustín Chancusi, Paúl Diestra, Damián Nicolalde
Public IP addresses of a private or public higher education institution receive large amounts of network traffic. However, the data network is vulnerable to the possibility of security attacks. This study develops a practical case based on the use of the Advanced IP Scanner and Shodan software tools, following a methodology that consists of discovering an education institution's IP network and scanning its hosts of interest to then find the security vulnerabilities of the main network addresses. From a statistical universe consisting of the entire range of IP addresses in the institution's network, a group of hosts of interest was defined as a sample set for further examination. On that basis, the aim of this study is to analyze and classify the obtained vulnerability information by severity for each found host using the described methodology, in order to obtain statistics of the vulnerabilities by severity and quantity at the host level and at the entire network level. It is concluded that most of the hosts have vulnerabilities in their Apache servers' HTTP daemons, which results in a high percentage of them having Critical-level vulnerabilities.
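The abstract describes looking up exposed hosts and grouping their vulnerabilities by severity. The sketch below illustrates that kind of workflow with the official shodan Python library; the API key, the example IP addresses, the CVSS thresholds, and the `cvss_lookup` callable (e.g. a local copy of the NVD feed) are assumptions for illustration, not details taken from the paper.

```python
# Sketch: query Shodan for each host of interest and tally vulnerabilities by severity.
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"           # assumption: placeholder key
HOSTS = ["203.0.113.10", "203.0.113.25"]  # assumption: example hosts of interest

def cvss_severity(score):
    """Map a CVSS score to a severity label (illustrative thresholds)."""
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    return "Low"

def audit_hosts(api_key, hosts, cvss_lookup):
    """Return per-host open ports and vulnerability counts grouped by severity.

    cvss_lookup is any callable mapping a CVE id to a CVSS score
    (for example, backed by a local NVD copy); it is not implemented here.
    """
    api = shodan.Shodan(api_key)
    report = {}
    for ip in hosts:
        host = api.host(ip)                     # open ports, banners and CVE ids for the host
        counts = {"Critical": 0, "High": 0, "Medium": 0, "Low": 0}
        for cve in host.get("vulns", []):       # CVE identifiers flagged by Shodan
            counts[cvss_severity(cvss_lookup(cve))] += 1
        report[ip] = {"open_ports": host.get("ports", []), "by_severity": counts}
    return report
```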
{"title":"Vulnerability Analysis of the Exposed Public IPs in a Higher Education Institution","authors":"Agustín Chancusi, Paúl Diestra, Damián Nicolalde","doi":"10.1145/3442520.3442523","DOIUrl":"https://doi.org/10.1145/3442520.3442523","url":null,"abstract":"Public IP addresses from a private or public higher education institution receive large amounts of network traffic. However, the data network is vulnerable to the possibility of security attacks. This study develops a case in a practical way based in the use of the Advance IP Scanner and Shodan software tools, and following a methodology that consists of discovering an education institution IP network and scanning its hosts of interest to then find the security vulnerabilities of the main network addresses. From a statistical universe consisting of the entire range of IP addresses in the institution's network, a group of hosts of interest were defined as a sample set for further examination. On that base, the aim of this study is to analyze and classify the obtained vulnerabilities information by severity of the vulnerability for each found host using the described methodology, in order to obtain statistics at a host level and at the entire network level of the vulnerabilities by severity and quantity. It is concluded that most of the hosts have vulnerabilities in their Apache servers’ HTTP daemons, and they cause in a high percentage of them having vulnerabilities at the Critical level.","PeriodicalId":340416,"journal":{"name":"Proceedings of the 2020 10th International Conference on Communication and Network Security","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127910797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Slow Scan Attack Detection Based on Communication Behavior
Tomoya Yamashita, Daisuke Miyamoto, Y. Sekiya, Hiroshi Nakamura
We present a novel method for detecting slow scan attacks. Attackers collect information about vulnerabilities in hosts through scan attacks and then penetrate the systems based on the collected information. Detecting scan attacks is therefore useful for preventing the attacks that follow. Intrusion detection systems (IDSs) have been proposed for detecting scan attacks; however, they cannot detect slow scan attacks, which are executed slowly over a long period. In this paper, we introduce novel features that are useful for distinguishing the communication behavior of scanning hosts from that of benign hosts. We then propose a detection method using these features. Furthermore, through experiments, we confirm the effectiveness of our method for detecting a slow scan attack.
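The abstract does not list the paper's specific behavioral features, so the sketch below only illustrates the general idea: aggregate per-source behavior over a long window and flag sources that contact many distinct targets while keeping a very low flow rate. The window size, thresholds, and flow tuple layout are assumptions.

```python
from collections import defaultdict

def extract_behavior(flows, window_seconds=3600):
    """Aggregate per-source communication behavior over long windows.

    flows: iterable of (timestamp, src_ip, dst_ip, dst_port) tuples.
    Returns {(src_ip, window_index): {"dsts": set, "ports": set, "flows": int}}.
    """
    buckets = defaultdict(lambda: {"dsts": set(), "ports": set(), "flows": 0})
    for ts, src, dst, dport in flows:
        key = (src, int(ts // window_seconds))
        b = buckets[key]
        b["dsts"].add(dst)
        b["ports"].add(dport)
        b["flows"] += 1
    return buckets

def flag_slow_scanners(buckets, min_targets=50, max_rate=0.1, window_seconds=3600):
    """Flag sources that touch many distinct targets at a very low flow rate."""
    suspects = set()
    for (src, _), b in buckets.items():
        rate = b["flows"] / window_seconds      # flows per second within the window
        if len(b["dsts"]) >= min_targets and rate <= max_rate:
            suspects.add(src)
    return suspects
```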
{"title":"Slow Scan Attack Detection Based on Communication Behavior","authors":"Tomoya Yamashita, Daisuke Miyamoto, Y. Sekiya, Hiroshi Nakamura","doi":"10.1145/3442520.3442525","DOIUrl":"https://doi.org/10.1145/3442520.3442525","url":null,"abstract":"We present a novel method for detecting slow scan attacks. Attackers collect information about vulnerabilities in hosts by scan attacks and then penetrate the systems based on the collected information. Detection of scan attacks is therefore useful to avoid the following attacks. The intrusion detection system (IDS) has been proposed for detecting scan attacks. However, it cannot detect slow scan attacks that are executed slowly over a long period. In this paper, we introduce novel features that are useful to distinguish the difference in the communication behavior between the scanning hosts and the benign hosts. Then, we propose the detection method using the features. Furthermore, through the experiments, we confirm the effectiveness of our method for detecting a slow scan attack.","PeriodicalId":340416,"journal":{"name":"Proceedings of the 2020 10th International Conference on Communication and Network Security","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123932855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
DIDarknet: A Contemporary Approach to Detect and Characterize the Darknet Traffic using Deep Image Learning
Arash Habibi Lashkari, Gurdip Kaur, Abir Rahali
Darknet traffic classification is critically important for categorizing real-time applications. Although there are notable efforts to classify darknet traffic, they rely heavily on existing datasets and machine learning classifiers, and there are extremely few efforts to detect and characterize darknet traffic using deep learning. This work proposes a novel approach, named DeepImage, which uses feature selection to pick the most important features, creates a gray image from them, and feeds it to a two-dimensional convolutional neural network to detect and characterize darknet traffic. Two encrypted traffic datasets are merged to create a darknet dataset for evaluating the proposed approach, which successfully characterizes darknet traffic with 86% accuracy.
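The exact feature set, image size, and network architecture are not given in the abstract; the PyTorch sketch below only shows the general pattern it describes, reshaping a selected flow-feature vector into a small grayscale image and classifying it with a 2D convolutional network. The 8x8 image size, layer sizes, and the two-class output are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def features_to_image(vec, side=8):
    """Pad/scale a selected feature vector into a side x side grayscale image in [0, 1]."""
    padded = np.zeros(side * side, dtype=np.float32)
    padded[: min(len(vec), side * side)] = vec[: side * side]
    lo, hi = padded.min(), padded.max()
    if hi > lo:
        padded = (padded - lo) / (hi - lo)
    return padded.reshape(1, side, side)        # (channels, height, width)

class SmallCNN(nn.Module):
    """Illustrative 2D CNN for 8x8 single-channel traffic images."""
    def __init__(self, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                    # 8x8 -> 4x4
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, n_classes),
        )

    def forward(self, x):
        return self.net(x)

# Usage: a batch of flows -> gray images -> class scores (darknet vs. benign, illustrative).
flows = np.random.rand(4, 64).astype(np.float32)           # 4 flows, 64 selected features each
images = torch.tensor(np.stack([features_to_image(f) for f in flows]))
logits = SmallCNN(n_classes=2)(images)
```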
{"title":"DIDarknet: A Contemporary Approach to Detect and Characterize the Darknet Traffic using Deep Image Learning","authors":"Arash Habibi Lashkari, Gurdip Kaur, Abir Rahali","doi":"10.1145/3442520.3442521","DOIUrl":"https://doi.org/10.1145/3442520.3442521","url":null,"abstract":"Darknet traffic classification is significantly important to categorize real-time applications. Although there are notable efforts to classify darknet traffic which rely heavily on existing datasets and machine learning classifiers, there are extremely few efforts to detect and characterize darknet traffic using deep learning. This work proposes a novel approach, named DeepImage, which uses feature selection to pick the most important features to create a gray image and feed it to a two-dimensional convolutional neural network to detect and characterize darknet traffic. Two encrypted traffic datasets are merged to create a darknet dataset to evaluate the proposed approach which successfully characterizes darknet traffic with 86% accuracy.","PeriodicalId":340416,"journal":{"name":"Proceedings of the 2020 10th International Conference on Communication and Network Security","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117122464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 49
DIDroid: Android Malware Classification and Characterization Using Deep Image Learning
Abir Rahali, Arash Habibi Lashkari, Gurdip Kaur, Laya Taheri, F. Gagnon, Frédéric Massicotte
The unrivaled threat of Android malware is the root cause of various security problems on the internet. Although there are remarkable efforts in the detection and classification of Android malware based on machine learning techniques, only a small number of attempts have been made to classify and characterize it using deep learning. Detecting Android malware on smartphones is an essential goal for the cyber security community in order to get rid of menacing malware samples. This paper proposes an image-based deep neural network method to classify and characterize Android malware samples taken from a huge malware dataset with 12 prominent malware categories and 191 eminent malware families. This work successfully demonstrates the use of deep image learning to classify and characterize Android malware with an accuracy of 93.36% and a log loss of less than 0.20 on the training and testing sets.
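The abstract reports the two evaluation metrics, accuracy and log loss, for the image-based classifier. The snippet below only shows how these metrics are typically computed with scikit-learn from predicted class probabilities; the toy three-class arrays are placeholders (the paper works with 12 categories and 191 families), not data from the paper.

```python
import numpy as np
from sklearn.metrics import accuracy_score, log_loss

# Placeholder predictions for a 3-class toy problem.
y_true = np.array([0, 2, 1, 2, 0])
proba = np.array([
    [0.85, 0.10, 0.05],
    [0.05, 0.15, 0.80],
    [0.20, 0.70, 0.10],
    [0.10, 0.25, 0.65],
    [0.70, 0.20, 0.10],
])

y_pred = proba.argmax(axis=1)
print("accuracy:", accuracy_score(y_true, y_pred))   # fraction of correctly classified samples
print("log loss:", log_loss(y_true, proba))          # penalises confident wrong probabilities
```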
{"title":"DIDroid: Android Malware Classification and Characterization Using Deep Image Learning","authors":"Abir Rahali, Arash Habibi Lashkari, Gurdip Kaur, Laya Taheri, F. Gagnon, Frédéric Massicotte","doi":"10.1145/3442520.3442522","DOIUrl":"https://doi.org/10.1145/3442520.3442522","url":null,"abstract":"The unrivaled threat of android malware is the root cause of various security problems on the internet. Although there are remarkable efforts in detection and classification of android malware based on machine learning techniques, a small number of attempts are made to classify and characterize it using deep learning. Detecting android malware in smartphones is an essential target for cyber community to get rid of menacing malware samples. This paper proposes an image-based deep neural network method to classify and characterize android malware samples taken from a huge malware dataset with 12 prominent malware categories and 191 eminent malware families. This work successfully demonstrates the use of deep image learning to classify and characterize android malware with an accuracy of 93.36% and log loss of less than 0.20 for training and testing set.","PeriodicalId":340416,"journal":{"name":"Proceedings of the 2020 10th International Conference on Communication and Network Security","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116262373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 33
Proof of Network Security Services: Enforcement of Security SLA through Outsourced Network Testing
Sultan Alasmari, Weichao Wang, Yu Wang
Many companies outsource their network security functionality to third-party service providers. To guarantee the quality of such services, a Security Service Level Agreement (SSLA) between the two parties often needs to be signed and enforced, and mechanisms to verify the execution of the SSLA must be designed. In this paper, we propose a mechanism that allows a disinterested third party to help end customers verify the SSLA. Specifically, an end customer can carefully craft network traffic and conduct spontaneous and configurable verification of the SSLA with the help of a group of testers. While the basic idea is straightforward, multiple methods must be designed to guarantee the execution of the testing procedure; for example, the testing sites must be prevented from being abused for network attacks. We describe our approaches in detail. Our analysis and quantitative results show that our approach can effectively help end customers verify the execution of a network security SLA.
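The abstract describes testers checking whether crafted traffic is handled as the SSLA promises. The toy UDP probe below is only a loose illustration of that idea and not the paper's protocol: the customer emits marked probes that the contracted security service is supposed to drop, and a tester behind the service counts any that leak through. The marker, port, endpoint, and pass/fail rule are all assumptions.

```python
import socket

PROBE_MARKER = b"SSLA-TEST-PROBE"   # assumption: marker the filter is contracted to drop
TARGET = ("203.0.113.50", 9999)     # assumption: tester endpoint behind the security service

def send_probes(count=10):
    """Customer side: emit marked probes that the security SLA says must be blocked."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(count):
        sock.sendto(PROBE_MARKER + str(i).encode(), TARGET)
    sock.close()

def count_leaked_probes(listen_port=9999, timeout=5.0):
    """Tester side: count probes that leaked through; any leak suggests an SLA violation."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", listen_port))
    sock.settimeout(timeout)
    leaked = 0
    try:
        while True:
            data, _ = sock.recvfrom(2048)
            if data.startswith(PROBE_MARKER):
                leaked += 1
    except socket.timeout:
        pass
    finally:
        sock.close()
    return leaked
```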
{"title":"Proof of Network Security Services: Enforcement of Security SLA through Outsourced Network Testing","authors":"Sultan Alasmari, Weichao Wang, Yu Wang","doi":"10.1145/3442520.3442533","DOIUrl":"https://doi.org/10.1145/3442520.3442533","url":null,"abstract":"Many companies outsource their network security functionality to third party service providers. To guarantee the quality of such services, a Security Service Level Agreement (SSLA) between the two parties often needs to be signed and enforced. Some mechanisms to verify the execution of the SSLA must be designed. In this paper, we propose a mechanism to allow a non-interest third party to help end customers verify the SSLA. Specifically, an end customer can carefully craft network traffic and conduct spontaneous and configurable verification of the SSLA with the help of a group of testers. While the basic idea is straightforward, multiple methods must be designed to guarantee the execution of the testing procedure. For example, we need to prevent the testing sites from being abused for network attacks. We describe our approaches in details. Our analysis and quantitative results show that our approach can effectively help end customers verify the execution of network security SLA.","PeriodicalId":340416,"journal":{"name":"Proceedings of the 2020 10th International Conference on Communication and Network Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122615345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Outsourced Secure ID3 Decision Tree Algorithm over Horizontally Partitioned Datasets with Consortium Blockchain
Ming Yang, Xuexian Hu, Jianghong Wei, Qihui Zhang, Wenfen Liu
Due to its capacity for storing massive data and providing huge computing resources, cloud computing has become a desirable platform to assist machine learning in multiple-data-owner scenarios. However, the issue of data privacy is far from well solved and thus has been a general concern in cloud-assisted machine learning. For example, in the existing cloud-assisted decision tree classification algorithms, it is very hard to guarantee data privacy, since all data owners have to aggregate their data on the cloud platform for model training. In this paper, we investigate the possibility of training a decision tree in a scenario where the distributed data are stored locally by each data owner, so that the privacy of the original data can be guaranteed in a more intuitive way. Specifically, we present a positive answer to the above issue with a privacy-preserving ID3 training scheme that uses the Gini index over datasets horizontally partitioned among multiple data owners. Since each data owner cannot directly divide the local dataset according to the selected best attributes, a consortium blockchain and a homomorphic encryption algorithm are employed to ensure the privacy and usability of the distributed data. Security analysis indicates that our scheme can preserve the privacy of the original data and the intermediate values. Moreover, extensive experiments show that our scheme achieves the same result as the original ID3 decision tree algorithm while additionally preserving data privacy, and the calculation and communication time overheads on data owners decrease greatly.
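For reference, the sketch below shows plain Gini-index attribute selection as used in ID3-style tree building, which is the core statistic the scheme computes; the consortium blockchain and homomorphic encryption that protect this computation across data owners in the paper are not shown, and the toy data and attribute indices are placeholders.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def gini_split(rows, labels, attr_index):
    """Weighted Gini impurity after splitting on the attribute at attr_index."""
    groups = {}
    for row, label in zip(rows, labels):
        groups.setdefault(row[attr_index], []).append(label)
    n = len(labels)
    return sum(len(g) / n * gini(g) for g in groups.values())

def best_attribute(rows, labels, candidate_attrs):
    """Pick the attribute whose split yields the lowest weighted Gini impurity."""
    return min(candidate_attrs, key=lambda a: gini_split(rows, labels, a))

# Toy data joined in the clear for illustration; in the paper each owner keeps its rows local.
rows = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "hot")]
labels = ["no", "no", "yes", "yes"]
print(best_attribute(rows, labels, candidate_attrs=[0, 1]))   # -> 0 (splitting on weather is purer)
```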
{"title":"Outsourced Secure ID3 Decision Tree Algorithm over Horizontally Partitioned Datasets with Consortium Blockchain","authors":"Ming Yang, Xuexian Hu, Jianghong Wei, Qihui Zhang, Wenfen Liu","doi":"10.1145/3442520.3442534","DOIUrl":"https://doi.org/10.1145/3442520.3442534","url":null,"abstract":"Due to the capacity of storing massive data and providing huge computing resources, cloud computing has been a desirable platform to assist machine learning in multiple-data-owners scenarios. However, the issue of data privacy is far from being well solved and thus has been a general concern in the cloud-assisted machine learning. For example, in the existing cloud-assisted decision tree classification algorithms, it is very hard to guarantee data privacy since all data owners have to aggregate their data to the cloud platform for model training. In this paper, we investigate the possibility of training a decision tree in the scenario that the distributed data are stored locally in each data owner, where the privacy of the original data can be guaranteed in a more intuitive approach. Specifically, we present a positive answer to the above issue by presenting a privacy-preserving ID3 training scheme using Gini index over horizontally partitioned datasets by multiple data owners. Since each data owner cannot directly divide the local dataset according to the best attributes selected, a consortium blockchain and a homomorphic encryption algorithm are employed to ensure the privacy and usability of the distributed data. Security analysis indicates that our scheme can preserve the privacy of the original data and the intermediate values. Moreover, extensive experiments show that our scheme can achieve the same result compared with the original ID3 decision tree algorithm while additionally preserving data privacy, and calculation time overhead and communication time overhead on data owners decrease greatly.","PeriodicalId":340416,"journal":{"name":"Proceedings of the 2020 10th International Conference on Communication and Network Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130990736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Analysis on Entropy Sources based on Smartphone Sensors
Na Lv, Tianyu Chen, Yuan Ma
Random number generators (RNGs) are basic primitives in cryptography. The randomness of the numbers generated by RNGs is the basis of the security of the various cryptosystems implemented in networks and communications. With the popularization of smart mobile devices (such as smartphones) and the surge in demand for cryptographic applications on such devices, research on providing random number services for mobile devices has attracted more and more attention. As important components of smartphones, sensors are used to collect data from user behavior and the environment, and some of these data sources have non-deterministic properties. Currently, some work focuses on how to design sensor-based RNGs for smartphones, since no additional hardware is required by this approach. It is critical to evaluate the quality of the entropy sources, which are the main source of randomness for RNGs. However, as far as we know, there is no work that systematically analyzes the feasibility of utilizing raw sensor data to generate random sequences, or how much entropy the data contain. In this paper, we aim to provide an analysis method for quantifying the entropy in the raw data captured by sensors embedded in smartphones, and to study the feasibility of generating random numbers from these data. We establish several data collection models for typical sensors under different scenarios and data sampling frequencies. Furthermore, we propose a universal entropy estimation scheme for multivariate data to quantify the entropy of the sensor data, and apply it to a type of Android smartphone. The experiments demonstrate that the raw data collected by the sensors have a considerable amount of entropy, and that the ability of different sensors to provide entropy is related to the usage scenario of the smartphone and the sampling frequency of the sensor data. In particular, in a static scenario with a sampling frequency of 50 Hz, we obtain a conservative min-entropy-based estimate for our test smartphones of about 189 bits/s, 13 bits/s, and 254 bits/s for the accelerometer, gyroscope, and magnetometer, respectively. The randomness of sensor data in dynamic scenarios is higher than in static scenarios, because the environment and the way the user handles the smartphone differ each time, and parts of these differences are unknowable to an attacker.
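A minimal sketch of the min-entropy idea the abstract relies on: quantize the raw samples, take the probability of the most likely symbol, and scale bits-per-sample by the sampling frequency to get bits per second. The quantization step and the synthetic noise data below are assumptions; the paper's estimator for multivariate sensor data is more elaborate.

```python
import math
import random
from collections import Counter

def min_entropy_per_sample(samples, quantization=0.01):
    """Empirical min-entropy in bits per sample: -log2 of the most probable quantized value."""
    symbols = [round(x / quantization) for x in samples]
    counts = Counter(symbols)
    p_max = max(counts.values()) / len(symbols)
    return -math.log2(p_max)

def entropy_rate(samples, sampling_hz):
    """Entropy throughput in bits per second at the given sampling frequency."""
    return min_entropy_per_sample(samples) * sampling_hz

# Synthetic accelerometer-like noise, sampled at 50 Hz as in the abstract's static scenario.
samples = [random.gauss(0.0, 0.02) for _ in range(5000)]
print(f"{entropy_rate(samples, sampling_hz=50):.1f} bits/s (synthetic data, illustrative only)")
```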
{"title":"Analysis on Entropy Sources based on Smartphone Sensors","authors":"Na Lv, Tianyu Chen, Yuan Ma","doi":"10.1145/3442520.3442528","DOIUrl":"https://doi.org/10.1145/3442520.3442528","url":null,"abstract":"Random number generator (RNG) is the basic primitive in cryptography. The randomness of random numbers generated by RNGs is the base of the security of various cryptosystems implemented in network and communications. With the popularization of smart mobile devices (such as smartphones) and the surge in demand for cryptographic applications of such devices, research on providing random number services for mobile devices has attracted more and more attentions. As the important components of smartphones, sensors are used to collect data from user behaviors and environments, and some data sources have the non-deterministic properties. Currently, some work focuses on how to design sensor-based RNG towards smartphones, since no additional hardware is required by this method. It is critical to evaluate the quality of entropy sources which is the main source of randomness for RNGs. However, as far as we know, there is no work to systematically analyze the feasibility for utilizing the raw sensor data to generate random sequences, and how much the entropy contained in the data is. In this paper, we aim to providing an analysis method for quantifying the entropy in the raw data captured by sensors embedded in smartphones, and studying the feasibility of generating random numbers from the data. We establish several data collection models for some typical sensors with different scenarios and data sampling frequencies. Furthermore, we propose a universal entropy estimation scheme for multivariate data to quantify the entropy of the sensor data, and apply it on a type of Android smartphones. The experiments demonstrate that the raw data collected by the sensors has a considerable amount of entropy, and the ability of different sensors to provide entropy has a certain relationship with the usage scenarios of smartphones and the sampling frequency of sensor data. Particularly, when in a static scenario and the sampling frequency is 50Hz, we get a conservative entropy estimation for our testing smartphones based on the min-entropy, which is about 189bits/s, 13bits/s and 254bits/s for the accelerometer, gyroscope, and magnetometer respectively. While the randomness of sensor data in dynamic scenarios will increase compared to static scenarios, because the environment and the way that the user uses the smartphones actually exist differences each time, parts of which are unknowable to the attacker.","PeriodicalId":340416,"journal":{"name":"Proceedings of the 2020 10th International Conference on Communication and Network Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129273882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Locust: Highly Concurrent DHT Experimentation Framework for Security Evaluations
Florian Adamsky, Daniel Kaiser, M. Steglich, T. Engel
Distributed Hash Table (DHT) protocols, such as Kademlia, provide a decentralized key-value lookup that is nowadays integrated into a wide variety of applications, such as Ethereum, the InterPlanetary File System (IPFS), and BitTorrent. However, many security issues in DHT protocols have not been solved yet. DHT networks are typically evaluated using mathematical models or simulations, often abstracting away artefacts that can be relevant for security and/or performance, and experiments that do capture these artefacts are typically run with too few nodes. In this paper, we present Locust, a novel highly concurrent DHT experimentation framework written in Elixir, which is designed for security evaluations. The framework allows running experiments with a full DHT implementation and around 4,000 nodes on a single machine, including an adjustable churn rate, thus yielding a favourable trade-off between the number of analysed nodes and realism. We evaluate the framework in terms of memory consumption, processing power, and network traffic.
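Locust itself is written in Elixir and its API is not described in the abstract, so the snippet below only illustrates the Kademlia primitive such DHTs are built on: the XOR distance used to find the nodes closest to a key during a lookup. The 8-bit toy identifiers are an assumption; real Kademlia identifiers are 160 bits.

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia distance metric: bitwise XOR of two node/key identifiers."""
    return a ^ b

def closest_nodes(key: int, node_ids, k: int = 3):
    """Return the k node IDs closest to key under the XOR metric (one Kademlia lookup step)."""
    return sorted(node_ids, key=lambda n: xor_distance(key, n))[:k]

# Toy 8-bit identifier space for illustration only.
nodes = [0b00010011, 0b01100001, 0b01100100, 0b11110000, 0b00010101]
print([bin(n) for n in closest_nodes(0b01100000, nodes)])
```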
{"title":"Locust: Highly Concurrent DHT Experimentation Framework for Security Evaluations","authors":"Florian Adamsky, Daniel Kaiser, M. Steglich, T. Engel","doi":"10.1145/3442520.3442531","DOIUrl":"https://doi.org/10.1145/3442520.3442531","url":null,"abstract":"Distributed Hash Table (DHT) protocols, such as Kademlia, provide a decentralized key-value lookup which is nowadays integrated into a wide variety of applications, such as Ethereum, InterPlanetary File System (IPFS), and BitTorrent. However, many security issues in DHT protocols have not been solved yet. DHT networks are typically evaluated using mathematical models or simulations, often abstracting away from artefacts that can be relevant for security and/or performance. Experiments capturing these artefacts are typically run with too few nodes. In this paper, we provide Locust, a novel highly concurrent DHT experimentation framework written in Elixir, which is designed for security evaluations. This framework allows running experiments with a full DHT implementation and around 4,000 nodes on a single machine including an adjustable churn rate; thus yielding a favourable trade-off between the number of analysed nodes and being realistic. We evaluate our framework in terms of memory consumption, processing power, and network traffic.","PeriodicalId":340416,"journal":{"name":"Proceedings of the 2020 10th International Conference on Communication and Network Security","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123191953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The analysis method of security vulnerability based on the knowledge graph
Yongfu Wang, Ying Zhou, Xiaohai Zou, Quanqiang Miao, Wei Wang
Given increasingly prominent network security issues, it is of great significance to deeply analyze the vulnerabilities of cyberspace software and hardware resources. Although the existing Common Vulnerabilities and Exposures (CVE) security vulnerability database contains a wealth of vulnerability information, the information is poorly readable, the potential correlations are difficult to express intuitively, and the degree of visualization is insufficient. To solve these problems, a method of constructing a knowledge graph of CVE security vulnerabilities is proposed. Through raw data acquisition, ontology modeling, and data extraction and import, the knowledge graph is loaded into the Neo4j graph database to complete the construction of the CVE knowledge graph. Based on the knowledge graph, in-depth analysis is performed along the cause, time, and association dimensions, and the results are displayed visually. Experiments show that this analysis method can intuitively and effectively mine the intrinsic value of CVE security vulnerability data.
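A minimal sketch of the import step the abstract mentions, loading extracted CVE entries into Neo4j with the official Python driver and Cypher MERGE statements. The node labels, relationship types, properties, and the sample records are illustrative assumptions, not the paper's ontology.

```python
from neo4j import GraphDatabase

# Illustrative CVE records as they might look after extraction from the CVE feed.
cves = [
    {"id": "CVE-2020-0001", "year": 2020, "cwe": "CWE-787", "product": "ExampleHTTPd"},
    {"id": "CVE-2020-0002", "year": 2020, "cwe": "CWE-79",  "product": "ExampleHTTPd"},
]

CYPHER = """
MERGE (v:Vulnerability {id: $id})
SET v.year = $year
MERGE (w:Weakness {id: $cwe})
MERGE (p:Product {name: $product})
MERGE (v)-[:CAUSED_BY]->(w)
MERGE (v)-[:AFFECTS]->(p)
"""

def load_cves(uri="bolt://localhost:7687", user="neo4j", password="password"):
    """Create vulnerability, weakness and product nodes plus their relationships."""
    driver = GraphDatabase.driver(uri, auth=(user, password))
    with driver.session() as session:
        for cve in cves:
            session.run(CYPHER, **cve)   # one MERGE batch per extracted CVE record
    driver.close()
```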
{"title":"The analysis method of security vulnerability based on the knowledge graph","authors":"Yongfu Wang, Ying Zhou, Xiaohai Zou, Quanqiang Miao, Wei Wang","doi":"10.1145/3442520.3442535","DOIUrl":"https://doi.org/10.1145/3442520.3442535","url":null,"abstract":"Given the increasingly prominent network security issues, it is of great significance to deeply analyze the vulnerability of network space software and hardware resources. Although the existing Common Vulnerabilities and Exposures (CVE) security vulnerability database contains a wealth of vulnerability information, the information is poorly readable, the potential correlation is difficult to express intuitively, and the degree of visualization is insufficient. To solve the current problems, a method of constructing a knowledge graph of CVE security vulnerabilities is proposed. By acquiring raw data, ontology modeling, data extraction and import, the knowledge graph is imported into the Neo4j graph database to complete the construction of the CVE knowledge graph. Based on the knowledge graph, the in-depth analysis is performed from the cause dimension, time dimension and association dimension, and the results are displayed visually. Experiments show that this analysis method can intuitively and effectively mine the intrinsic value of CVE security vulnerability data.","PeriodicalId":340416,"journal":{"name":"Proceedings of the 2020 10th International Conference on Communication and Network Security","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121032133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
TLS Encrypted Application Classification Using Machine Learning with Flow Feature Engineering
Onur Barut, Rebecca S. Zhu, Yan Luo, Tong Zhang
Network traffic classification has become increasingly important as the number of devices connected to the Internet grows rapidly. Proportionally, the amount of encrypted traffic is also increasing, making payload-based classification methods obsolete. Consequently, machine learning approaches have become crucial where user privacy is concerned. For this purpose, we propose an accurate, fast, and privacy-preserving encrypted traffic classification approach with engineered flow feature extraction and appropriate feature selection. The proposed scheme achieves a 0.92899 macro-averaged F1 score and a 0.88313 macro-averaged mAP score for the encrypted traffic classification of the Audio, Email, Chat, and Video classes derived from the non-vpn2016 dataset. Further experiments on a mixed non-encrypted and encrypted flow dataset with a data augmentation method called the Synthetic Minority Over-sampling Technique (SMOTE) are conducted, and the results for TLS-encrypted and mixed flows are discussed.
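The abstract does not specify the engineered features or the classifier, so the sketch below only shows the generic shape of such a pipeline: SMOTE oversampling applied to the training split, a standard classifier, and a macro-averaged F1 score for evaluation. The random feature matrix, class labels, and random forest model are placeholders.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Placeholder flow-feature matrix and labels (e.g. 0=Audio, 1=Email, 2=Chat, 3=Video).
rng = np.random.default_rng(0)
X = rng.random((400, 20))
y = rng.integers(0, 4, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

# Oversample minority classes only on the training split to avoid leaking test data.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```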
{"title":"TLS Encrypted Application Classification Using Machine Learning with Flow Feature Engineering","authors":"Onur Barut, Rebecca S. Zhu, Yan Luo, Tong Zhang","doi":"10.1145/3442520.3442529","DOIUrl":"https://doi.org/10.1145/3442520.3442529","url":null,"abstract":"Network traffic classification has become increasingly important as the number of devices connected to the Internet is rapidly growing. Proportionally, the amount of encrypted traffic is also increasing, making payload based classification methods obsolete. Consequently, machine learning approaches have become crucial when user privacy is concerned. For this purpose, we propose an accurate, fast, and privacy preserved encrypted traffic classification approach with engineered flow feature extraction and appropriate feature selection. The proposed scheme achieves a 0.92899 macro-average F1 score and a 0.88313 macro-averaged mAP score for the encrypted traffic classification of Audio, Email, Chat, and Video classes derived from the non-vpn2016 dataset. Further experiments on the mixed non-encrypted and encrypted flow dataset with a data augmentation method called Synthetic Minority Over-Sampling Technique are conducted and the results are discussed for TLS-encrypted and mixed flows.","PeriodicalId":340416,"journal":{"name":"Proceedings of the 2020 10th International Conference on Communication and Network Security","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130648433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7