
Latest publications: 2023 IEEE/ACIS 21st International Conference on Software Engineering Research, Management and Applications (SERA)

Evaluating the Performance of Containerized Webservers against web servers on Virtual Machines using Bombardment and Siege
Daniel Ukene, H. Wimmer, Jongyeop Kim
Containerization is becoming an increasingly common aspect of DevOps. Adding a container layer increases complexity and could impact system performance. This study explores the performance differences of the Apache and Nginx web servers on Virtual Machines (VMs) and in Docker containers built from official web server images on Docker Hub. A sandbox environment was created with both containerized and non-containerized versions of the web servers, and their performance was analyzed using line graphs. The results showed differences in performance between VMs and Docker containers, with some variation from previous research because the virtualization was done locally rather than on the cloud. This study would be advantageous for organizations that keep infrastructure on-premises due to security or governing regulations.
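As a rough illustration of the measurement setup, the sketch below is a minimal Python stand-in for a Siege-style load run (the actual study drives Apache and Nginx with Bombardment and Siege); the local test server, URL, and request count are all illustrative assumptions:

```python
import threading
import time
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

class QuietHandler(SimpleHTTPRequestHandler):
    def log_message(self, fmt, *args):  # silence per-request logging
        pass

def benchmark(url, requests=20):
    """Fire sequential GET requests and report latency statistics."""
    latencies = []
    for _ in range(requests):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        latencies.append(time.perf_counter() - start)
    return {
        "requests": requests,
        "mean_ms": 1000 * sum(latencies) / len(latencies),
        "max_ms": 1000 * max(latencies),
    }

if __name__ == "__main__":
    # serve the current directory on an ephemeral port, then hit it
    server = HTTPServer(("127.0.0.1", 0), QuietHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(benchmark(f"http://127.0.0.1:{server.server_port}/", requests=10))
    server.shutdown()
```

Against real targets, one would point `benchmark` at the VM-hosted and containerized servers in turn and compare the resulting latency curves.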
DOI: 10.1109/SERA57763.2023.10197818 (published 2023-05-23)
Citations: 0
TIPICAL - Type Inference for Python In Critical Accuracy Level
Jonathan Elkobi, Bernd Gruner, Tim Sonnekalb, C. Brust
Type inference methods based on deep learning are becoming increasingly popular as they aim to compensate for the drawbacks of static and dynamic analysis approaches, such as high uncertainty. However, their practical application is still debatable due to several intrinsic issues; for example, code from different software domains involves data types that are unknown to the type inference system. To overcome these problems and obtain high-confidence predictions, we present TIPICAL, a method that combines deep similarity learning with novelty detection. We show that our method can better predict data types with high confidence by successfully filtering out unknown and inaccurately predicted data types, achieving higher F1 scores than the state-of-the-art type inference method Type4Py. Additionally, we investigate how different software domains and data type frequencies may affect the results of our method.
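The novelty-detection step can be pictured as distance-based rejection in an embedding space. The sketch below is a deliberate simplification (TIPICAL's actual pipeline uses learned deep similarity embeddings; the `filter_predictions` helper, the centroids, and the threshold here are all hypothetical):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def filter_predictions(embeddings, centroids, threshold):
    """For each predicted-type embedding, accept the nearest known type's
    label only if it lies within `threshold`; otherwise reject the
    prediction as 'unknown' (novelty detection by distance)."""
    results = []
    for emb in embeddings:
        label, center = min(centroids.items(),
                            key=lambda kv: euclidean(emb, kv[1]))
        if euclidean(emb, center) <= threshold:
            results.append(label)
        else:
            results.append("unknown")
    return results
```

A prediction far from every known type's region is filtered out rather than reported with false confidence, which is how unknown data types are kept from polluting the output.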
DOI: 10.1109/SERA57763.2023.10197800 (published 2023-05-23)
Citations: 0
Multi-level Adaptive Execution Tracing for Efficient Performance Analysis
Mohammed Adib Khan, Naser Ezzati-Jivan
Troubleshooting system performance issues is a challenging task that requires a deep understanding of the various factors that may impact system performance. The process involves analyzing trace logs from the kernel and user space using tools such as ftrace, strace, DTrace, or LTTng. However, pre-set tracing instrumentation can miss important data when too few system components are covered by observability instrumentation, while too much coverage can bury the signal in unnecessary noise, making debugging extremely difficult. This paper proposes an adaptive instrumentation technique for execution tracing that dynamically decides not only which components to trace but also when to trace them, thus reducing the risk of missing data relevant to the performance problem and increasing debugging accuracy by reducing unwanted noise. Our case study results show that the proposed method can handle tracing instrumentation dynamically at both the kernel and application levels while maintaining low overhead.
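A toy model of that decision logic, not the paper's implementation (the class name, window size, and latency threshold are invented for illustration): detailed tracing turns on per component only while its recent behavior looks anomalous.

```python
from collections import defaultdict, deque

class AdaptiveTracer:
    """Sketch: keep tracing coarse everywhere, and switch a component to
    detailed tracing only while its recent latencies exceed a threshold,
    so coverage follows the problem instead of being pre-set."""

    def __init__(self, threshold_ms=5.0, window=10):
        self.threshold_ms = threshold_ms
        self.history = defaultdict(lambda: deque(maxlen=window))
        self.detailed_events = []

    def should_trace(self, component):
        # the adaptive decision: *which* component, and *when*
        hist = self.history[component]
        return bool(hist) and sum(hist) / len(hist) > self.threshold_ms

    def record(self, component, latency_ms):
        self.history[component].append(latency_ms)
        if self.should_trace(component):
            self.detailed_events.append((component, latency_ms))
```

Components behaving normally contribute no detailed events, which is the noise-reduction half of the argument; a slow component is captured as soon as its window turns anomalous.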
DOI: 10.1109/SERA57763.2023.10197790 (published 2023-05-23)
Citations: 0
LSTM-AE for Anomaly Detection on Multivariate Telemetry Data
Anes Abdennebi, Alp Tunçay, Cemal Yilmaz, Anil Koyuncu, Oktay Gungor
Organizations and companies that collect data generated by sales, transactions, client/server communications, IoT nodes, devices, engines, or any other data-generating or data-exchanging source need to analyze this data to reveal insights about the activities running on their systems. Streaming data carries multivariate variables with dependencies on one another that extend temporally, i.e., to previous time steps. Long Short-Term Memory (LSTM), a variant of recurrent neural networks, can learn such long-term dependencies from previous time steps of sequence-shaped data, making an LSTM model a valid option for offline anomaly detection on our data and for foreseeing future system incidents; anything that negatively affects the system or the services it provides is considered an incident. Moreover, raw input data may be noisy and ill-suited to the model, leading to misleading predictions. A wiser choice is an LSTM Autoencoder (LSTM-AE), which specializes in extracting meaningful features from the examined data and looks back several steps to preserve temporal dependencies. In our work, we developed two LSTM-AE models and evaluated them in an industrial setup at Koçfinans (a finance company operating in Turkey), which runs a distributed system of several nodes hosting dozens of microservices. The outcome of this study shows that our trained LSTM-AE models identified atypical behavior in offline data with high accuracy. Furthermore, the deployed models flagged the system at the exact times of the two previously reported failures, and they raised warnings a week before an actual failure, demonstrating their effectiveness on online data. Our models achieved 99.7% accuracy and an F1-score of 89.1%. They also show promise for finding a proper LSTM-AE architecture when time series data with temporal dependencies is fed to the model.
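Downstream of any trained autoencoder, anomaly flagging typically reduces to thresholding the reconstruction error; a minimal sketch of that step follows (the k-sigma cutoff is a common convention assumed here, not necessarily the paper's criterion, and the error values would come from the trained LSTM-AE):

```python
import statistics

def flag_anomalies(reconstruction_errors, k=3.0):
    """Return indices of time steps whose reconstruction error exceeds
    mean + k * stdev. High error means the autoencoder could not
    reproduce the input, i.e., the telemetry looks atypical."""
    mu = statistics.mean(reconstruction_errors)
    sigma = statistics.pstdev(reconstruction_errors)
    cutoff = mu + k * sigma
    return [i for i, e in enumerate(reconstruction_errors) if e > cutoff]
```

In production, the cutoff would be calibrated on incident-free traffic so that warnings fire early without drowning operators in false alarms.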
DOI: 10.1109/SERA57763.2023.10197673 (published 2023-05-23)
Citations: 0
Calibrating Cybersecurity Experiments: Evaluating Coverage Analysis for Fuzzing Benchmarks
J. Alves-Foss, Aditi Pokharel, Ronisha Shigdel, Jia Song
Computer science experimentation, whether it be for safety, reliability or cybersecurity, is an important part of scientific advancement. Evaluation of relative merits of various experiments typically requires well-calibrated benchmarks that can be used to measure the experimental results. This paper reviews current trends in using benchmarks in fuzzing experimental research for cybersecurity, specifically with metrics related to coverage analysis. Strengths and weaknesses of the current techniques are evaluated and suggestions for improving the current approaches are proposed. The end goal is to convince researchers that benchmarks for experimentation must be well documented, archived and calibrated so that the community knows how well the tools and techniques perform with respect to the possible maximum in the benchmark.
DOI: 10.1109/SERA57763.2023.10197736 (published 2023-05-23)
Citations: 0
Customer Segmentation Using Credit Card Data Analysis
S. Raj, Santanu Roy, Surajit Jana, Soumyadip Roy, Takaaki Goto, S. Sen
Customer segmentation is the separation of a market into multiple distinct groups of consumers who share similar characteristics. Market segmentation is an effective way to define and meet customer needs and to shape future business plans. Unsupervised machine learning algorithms are suitable for analyzing and identifying possible sets of customers when no labeled data about the customers is available. In this research work, the spending of different credit card customers is analyzed to segment them into clusters and to plan further business improvements based on the distinct characteristics of the identified clusters.
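Since no labels are available, a standard choice for such segmentation is k-means clustering. The self-contained sketch below runs it on toy two-feature spending vectors; the abstract does not name the exact algorithm used, so this is illustrative only:

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(cluster):
    return tuple(sum(vals) / len(cluster) for vals in zip(*cluster))

def kmeans(points, k, iters=100, seed=0):
    """Plain k-means: assign each point to its nearest centroid, then
    move each centroid to the mean of its cluster, until assignments
    stop changing."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[nearest].append(p)
        new_centroids = [centroid(c) if c else centroids[i]
                         for i, c in enumerate(clusters)]
        if new_centroids == centroids:
            break
        centroids = new_centroids
    return centroids, clusters
```

Each resulting cluster is a candidate customer segment whose centroid summarizes its typical spending profile.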
DOI: 10.1109/SERA57763.2023.10197704 (published 2023-05-23)
Citations: 0
Commit Message Can Help: Security Patch Detection in Open Source Software via Transformer
Fei Zuo, Xin Zhang, Yuqi Song, J. Rhee, Jicheng Fu
As open source software is widely used, the vulnerabilities contained therein also propagate rapidly into a large number of innocent applications. Even worse, many vulnerabilities in open-source projects are fixed silently, leaving affected software unaware and thus exposed to risk. To protect deployed software, designing an effective patch classification system becomes more of a need than an option. To this end, some researchers have taken advantage of recent advances in natural language processing to learn from both commit messages and code changes; however, these approaches often incur high false positive rates, and existing work cannot yet answer how much the textual description (such as commit messages) alone influences the final triage. In this paper, we propose a Transformer-based patch classifier that does not use any code changes as inputs. Surprisingly, extensive experiments show that the proposed approach significantly outperforms other state-of-the-art work, with a high precision of 93.0% and a low false positive rate. Our research thus further confirms the critical importance of well-crafted commit messages for later software maintenance. Finally, our case study identifies 48 silent security patches, which can benefit the affected software.
DOI: 10.1109/SERA57763.2023.10197730 (published 2023-05-23)
Citations: 2
3 B (Block Byte Bit) Cipher Algorithm for Secure Socket Layer
Y. Geum
This paper proposes an ECSSL (Elliptic Curve Secure Socket Layer) protocol that provides stronger security and faster processing than the existing SSL (Secure Socket Layer) protocol. The ECSSL protocol consists of ECC (Elliptic Curve Cryptography), the ThreeB (Block Byte Bit Cipher) algorithm to prevent eavesdropping, and the HMAC (Hash Message Authentication Code) algorithm to create digital signatures using a shared secret key. In particular, because ThreeB performs byte exchange using a randomization technique together with bit-XOR operations, its security and processing time are much improved compared with DES, which uses a fixed index table.
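To make the two described primitives concrete, the toy sketch below combines a key-seeded byte permutation ("byte exchange using a randomization technique") with an XOR keystream ("bit-XOR operations"). It is an illustration of the idea only, not the published ThreeB algorithm, and it is not cryptographically secure:

```python
import random

def _perm(n, key):
    """Key-seeded permutation of byte positions (the 'byte exchange')."""
    rng = random.Random(key)
    p = list(range(n))
    rng.shuffle(p)
    return p

def encrypt(data: bytes, key: int) -> bytes:
    p = _perm(len(data), key)
    shuffled = bytes(data[p[i]] for i in range(len(data)))
    # XOR each byte with a key-seeded keystream (the 'bit-xor' step)
    rng = random.Random(key ^ 0xA5A5)
    return bytes(b ^ rng.randrange(256) for b in shuffled)

def decrypt(blob: bytes, key: int) -> bytes:
    # undo the XOR keystream, then invert the permutation
    rng = random.Random(key ^ 0xA5A5)
    shuffled = bytes(b ^ rng.randrange(256) for b in blob)
    p = _perm(len(blob), key)
    out = bytearray(len(blob))
    for i, src in enumerate(p):
        out[src] = shuffled[i]
    return bytes(out)
```

Because the permutation depends on the key rather than a fixed index table, two encryptions under different keys scatter bytes differently, which is the contrast with DES the abstract draws.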
DOI: 10.1109/SERA57763.2023.10197799 (published 2023-05-23)
Citations: 0
Operation Management Method of Software Defined Perimeter for Promoting Zero-Trust Model
S. Tanimoto, Sogen Hori, Hiroyuki Sato, Atsushi Kanai
Telework has been on the rise since the advent of COVID-19, and concerns have arisen about issues such as information leakage due to internal fraud. The zero-trust model is attracting attention as a countermeasure. This model reduces risk by constantly performing authentication and authorization, thus leading to improved security levels and safer operation. However, currently less than 40% of the companies in Japan have introduced zero trust into their security policies, mainly due to the lack of specific guidelines for operational management. We have therefore developed a security policy (service authorization conditions) for the software defined perimeter (SDP) zero-trust model as a universal operational management method to promote zero-trust implementation. Specifically, we simplify the time/place/occasion (TPO) conditions of users as T (inside/outside working hours), P (inside/outside the company, telework), and O (with/without visitors), resulting in 12 patterns, and for each of these TPO conditions, we propose detailed new service authorization conditions for SDP. The results of qualitative evaluation demonstrated the effectiveness of the proposed method. Our findings will contribute to the introduction of the zero-trust model and pave the way for safer and more secure corporate networks.
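The 12 TPO patterns follow directly from the three simplified dimensions (2 × 3 × 2). A minimal sketch of enumerating them and performing a default-deny policy lookup (the rule values are placeholders; the concrete authorization conditions per pattern are the paper's contribution):

```python
from itertools import product

T_VALUES = ("inside working hours", "outside working hours")
P_VALUES = ("inside the company", "outside the company", "telework")
O_VALUES = ("with visitors", "without visitors")

# 2 x 3 x 2 = 12 TPO contexts, each mapped to its own authorization rule
TPO_PATTERNS = list(product(T_VALUES, P_VALUES, O_VALUES))

def authorize(t, p, o, policy):
    """Look up the service authorization decision for one TPO context;
    anything the policy does not explicitly allow is denied, matching
    the zero-trust posture of always verifying."""
    return policy.get((t, p, o), "deny")
```

A real SDP controller would evaluate these conditions on every access request, not once per session, which is what makes the model zero-trust.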
DOI: 10.1109/SERA57763.2023.10197716 (published 2023-05-23)
Citations: 0
Evaluating Code Metrics in GitHub Repositories Related to Fake News and Misinformation
Jason Duran, M. Sakib, Nasir U. Eisty, Francesca Spezzano
The surge of research on fake news and misinformation in the aftermath of the 2016 election has led to a significant increase in publicly available source code repositories. Our study aims to systematically analyze and evaluate the most relevant repositories and their Python source code in this area to improve awareness, quality, and understanding of these resources within the research community. Additionally, our work aims to measure the quality and complexity metrics of these repositories and identify their fundamental features to aid researchers in advancing the field’s knowledge in understanding and preventing the spread of misinformation on social media. As a result, we found that more popular fake news repositories and associated papers with higher citation counts tend to have more maintainable code measures, more complex code paths, a larger number of lines of code, a higher Halstead effort, and fewer comments. Utilizing these findings to devise efficient research and coding techniques to combat fake news, we can strive towards building a more knowledgeable and well-informed society.
2016年大选后,对假新闻和错误信息的研究激增,导致公开可用的源代码库大幅增加。我们的研究旨在系统地分析和评估该领域最相关的存储库及其Python源代码,以提高研究社区对这些资源的认识、质量和理解。此外,我们的工作旨在衡量这些存储库的质量和复杂性指标,并确定它们的基本特征,以帮助研究人员在理解和防止社交媒体上错误信息的传播方面推进该领域的知识。结果,我们发现更受欢迎的假新闻库和相关论文的引用次数更高,往往有更多可维护的代码度量,更复杂的代码路径,更多的代码行,更高的Halstead努力和更少的评论。利用这些发现,设计有效的研究和编码技术来打击假新闻,我们可以努力建设一个更有知识、更灵通的社会。
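The metrics the study measures (lines of code, code paths, Halstead effort, comments) can all be computed from source text. Below is a rough, self-contained sketch of Halstead effort for Python snippets using the standard `tokenize` module — the operator/operand classification here is a deliberate simplification for illustration, not the study's actual tooling:

```python
import io
import math
import tokenize

def halstead_effort(source: str) -> float:
    """Rough Halstead effort for a Python snippet.

    Simplified classification: punctuation and a few keywords count as
    operators; names, numbers, and strings as operands. Real metric tools
    (and the referenced study) draw these lines more carefully.
    """
    operators, operands = [], []
    keywords = {"if", "else", "for", "while", "def", "return"}
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.OP or tok.string in keywords:
            operators.append(tok.string)
        elif tok.type in (tokenize.NAME, tokenize.NUMBER, tokenize.STRING):
            operands.append(tok.string)
    n1, n2 = len(set(operators)), len(set(operands))  # distinct counts
    N1, N2 = len(operators), len(operands)            # total counts
    if n1 == 0 or n2 == 0:
        return 0.0
    volume = (N1 + N2) * math.log2(n1 + n2)   # V = N * log2(n)
    difficulty = (n1 / 2) * (N2 / n2)          # D = (n1/2) * (N2/n2)
    return difficulty * volume                 # E = D * V

snippet = "def add(a, b):\n    return a + b\n"
print(f"Halstead effort: {halstead_effort(snippet):.1f}")
```

Higher effort values indicate code that is harder to write and comprehend, which is how the study relates this metric to repository popularity and maintainability.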
{"title":"Evaluating Code Metrics in GitHub Repositories Related to Fake News and Misinformation","authors":"Jason Duran, M. Sakib, Nasir U. Eisty, Francesca Spezzano","doi":"10.1109/SERA57763.2023.10197739","DOIUrl":"https://doi.org/10.1109/SERA57763.2023.10197739","url":null,"abstract":"The surge of research on fake news and misinformation in the aftermath of the 2016 election has led to a significant increase in publicly available source code repositories. Our study aims to systematically analyze and evaluate the most relevant repositories and their Python source code in this area to improve awareness, quality, and understanding of these resources within the research community. Additionally, our work aims to measure the quality and complexity metrics of these repositories and identify their fundamental features to aid researchers in advancing the field’s knowledge in understanding and preventing the spread of misinformation on social media. As a result, we found that more popular fake news repositories and associated papers with higher citation counts tend to have more maintainable code measures, more complex code paths, a larger number of lines of code, a higher Halstead effort, and fewer comments. 
Utilizing these findings to devise efficient research and coding techniques to combat fake news, we can strive towards building a more knowledgeable and well-informed society.","PeriodicalId":211080,"journal":{"name":"2023 IEEE/ACIS 21st International Conference on Software Engineering Research, Management and Applications (SERA)","volume":"2013 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127308314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0