
Latest Publications: 15th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)

Effective Domain Modeling for Mobile Business AHMS (Adaptive Human Management Systems) requirements
Haeng-Kon Kim
Software development projects tend to grow larger and more time-consuming over time. Many companies have turned to software generation techniques to save time and costs. Software generation techniques take information from one area of the application and make intelligent decisions to automatically generate a different area. Considerable achievements have been made in the area of object-relational mappers that generate business objects from their relational database equivalents, and vice versa. There are also many products that can generate business objects and databases from the domain model of the application. Domain engineering is the foundation for emerging "product line" software development approaches and affects the maintainability, understandability, usability, and reusability characteristics of a family of similar systems [1]. In this paper, we suggest a method that systematically defines, analyzes, and designs a domain to effectively enhance reusability in Mobile Business Domain Modeling (MBDM) during the AHMS (Adaptive Human Management Systems) requirements phase. To this end, we objectively extract information that can be reused in a domain from the requirements analysis phase. We sustain and refine this information and match it to the artifacts of each phase in domain engineering. Through this method, reusable domain components and a malleable domain architecture can be produced. In addition, we show the practical applicability and features of our approach.
Citations: 5
An application of 3D spiral visualization to the Uchida-Kraepelin psychodiagnostic test
Zhuang Heliang, F. Sugimoto, Chieko Kato, K. Tsuchida
Psychodiagnostic tests are used by hospitals, schools, companies, and other organizations. They have proven effective at revealing the internal and external status of a subject in detail. However, the data and results of psychodiagnostic tests are very difficult for non-professionals to interpret: the resulting data are voluminous, making them hard to read, and the level of understanding varies with the cognitive ability and experience of the researcher or counselor interpreting the results. This study takes the results of psychodiagnostic tests and displays them as an easily understood 3D graph, rendered as a spiral shape using the Processing environment. The Uchida-Kraepelin psychodiagnostic test (U-K test) is administered to over one million people each year by hospitals, schools, companies, and others. The purpose of this study is to use the 3D spiral to display the U-K test's workload calculation and to emphasize the use of color in the 3D spiral visualization. A version with colors visible to colorblind people is used in conjunction with the 3D spiral visualization: color encodes the U-K workload calculation for non-colorblind viewers, while a colorblind-accessible version is provided for colorblind viewers. The graphs produced in the study were given to professionals, non-professionals, and colorblind people for evaluation.
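The paper itself uses the Processing environment and does not include source code. As a rough, hypothetical illustration of the mapping it describes, the Python/matplotlib sketch below plots assumed per-minute answer counts from a U-K-style test along a 3D spiral, with color encoding workload; the viridis colormap is chosen because it remains readable for most colorblind viewers. The data values and spiral parameterization are assumptions, not the authors' actual method.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical U-K-style data: answers completed in each one-minute row.
counts = np.array([52, 48, 55, 60, 47, 43, 58, 61, 50, 45,
                   57, 62, 49, 44, 53, 59, 63, 51, 46, 54,
                   60, 64, 52, 47, 55, 61, 65, 53, 48, 56])

t = np.linspace(0, 6 * np.pi, len(counts))  # three turns of the spiral
r = 1 + t / (2 * np.pi)                     # radius grows with each turn
x, y, z = r * np.cos(t), r * np.sin(t), t   # height encodes elapsed time

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot(x, y, z, color='gray', linewidth=0.5)             # spiral backbone
sc = ax.scatter(x, y, z, c=counts, cmap='viridis', s=40)  # color = workload
fig.colorbar(sc, ax=ax, label='answers per minute')
plt.show()
```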
Citations: 0
Collaborative filtering recommendation algorithm based on item attributes
Mengxing Huang, Longfei Sun, Wencai Du
To address the dataset-sparsity and cold-start shortcomings of the traditional item-based collaborative filtering recommendation algorithm, and to improve the accuracy of similarity calculation and the quality of recommendations, a collaborative filtering recommendation algorithm based on item attributes is proposed, taking attribute theory as its theoretical basis. By analyzing the items, their attributes are enumerated and attribute weights are calculated; the similarity between items is then computed using an attribute barycenter coordinate model together with the item attribute weights, and recommendation forecasts are produced. Finally, experimental results show that, compared with the traditional algorithm, the proposed algorithm can effectively alleviate the user-rating sparsity problem and improve the quality of the recommendation system.
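The abstract does not give the barycenter-coordinate formula, so the sketch below substitutes a plain weighted cosine similarity over item attribute vectors to illustrate the overall flow: attribute weights feed an item-item similarity, which feeds an item-based rating prediction. All names, toy data, and the similarity choice are assumptions, not the paper's exact model.

```python
import numpy as np

def attribute_similarity(attrs_a, attrs_b, weights):
    """Weighted cosine similarity between two items' attribute vectors.
    A stand-in for the paper's barycenter-coordinate similarity."""
    wa, wb = weights * attrs_a, weights * attrs_b
    denom = np.linalg.norm(wa) * np.linalg.norm(wb)
    return float(wa @ wb / denom) if denom else 0.0

def predict_rating(user_ratings, target_item, item_attrs, weights):
    """Item-based CF step: predict a rating for target_item from
    attribute-similar items the user has already rated."""
    num = den = 0.0
    for item, rating in user_ratings.items():
        s = attribute_similarity(item_attrs[target_item],
                                 item_attrs[item], weights)
        num += s * rating
        den += abs(s)
    return num / den if den else 0.0

# Toy data: binary genre attributes [action, romance, sci-fi] per movie.
item_attrs = {
    'A': np.array([1.0, 0.0, 1.0]),
    'B': np.array([1.0, 0.0, 0.0]),
    'C': np.array([0.0, 1.0, 0.0]),
}
weights = np.array([0.5, 0.2, 0.3])  # assumed attribute weights
print(predict_rating({'A': 5.0, 'C': 2.0}, 'B', item_attrs, weights))
```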
Citations: 6
Converting PCAPs into Weka mineable data
C. A. Fowler, R. Hammell
In today's world there is an unprecedented volume of information available to organizations of all sizes; the "information overload" problem is well documented. This problem is especially challenging in the world of network intrusion detection. In this realm, we must not only sift through vast amounts of data, but we must also do so in a timely manner, even when at times we are not sure exactly what we are trying to find. In the grander scheme of our work, we intend to demonstrate that several different data mining algorithms reporting to an overarching layer will yield more accurate results than any one data mining application (or algorithm) acting on its own. The system will operate in the domain of offline network and computer forensic data mining, under the guidance of a hybrid intelligence/multi-agent systems based approach for interpretation and interpolation of the findings. Toward that end, in this paper we build upon earlier work, undertaking the steps required for generating and preparing suitably mineable data. Specifically, we are concerned with extracting as much useful data as possible from a PCAP (packet capture) for import into Weka. While a PCAP may have thousands of field/value pairs, Wireshark's and tshark's CSV (comma-separated value) output modules render only a small percentage of these fields and their values by default. We introduce a tool of our own making that enumerates every field (with or without a value) in any PCAP and generates an ARFF (Attribute-Relation File Format, Weka's default). This code represents a component of a larger application we are designing (future work) which will ingest a PCAP, semi-autonomously preprocess it, and feed it into Weka for processing/mining using several different algorithms.
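This is not the authors' tool, but a minimal sketch of the same idea: dump every dissected field from a PCAP via tshark's JSON output (assuming tshark is installed and on PATH), flatten each packet's layer tree into dotted field names, and emit one ARFF attribute per observed field, writing `?` for fields a given packet lacks.

```python
import json
import subprocess

def pcap_to_arff(pcap_path, arff_path):
    # tshark's JSON exporter keeps ALL field/value pairs, unlike the
    # small default subset rendered by the CSV output module.
    out = subprocess.run(['tshark', '-r', pcap_path, '-T', 'json'],
                         capture_output=True, text=True, check=True)
    packets = json.loads(out.stdout)

    def flatten(node, prefix=''):
        # Recursively flatten a packet's layer tree into dotted names.
        fields = {}
        for key, val in node.items():
            name = prefix + key
            if isinstance(val, dict):
                fields.update(flatten(val, name + '.'))
            elif isinstance(val, list):
                fields[name] = ' '.join(map(str, val))
            else:
                fields[name] = str(val)
        return fields

    rows = [flatten(p['_source']['layers']) for p in packets]
    names = sorted({k for row in rows for k in row})  # union of all fields

    with open(arff_path, 'w') as f:
        f.write('@RELATION pcap\n\n')
        for n in names:
            f.write('@ATTRIBUTE "%s" STRING\n' % n)
        f.write('\n@DATA\n')
        for row in rows:
            vals = ['"%s"' % row[n].replace('"', "'") if n in row
                    else '?' for n in names]  # ? is ARFF's missing value
            f.write(','.join(vals) + '\n')
```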
Citations: 13
A dialogue-based framework for the user experience reengineering of a legacy application
A. Martella, Roberto Paiano, Andrea Pandurino
Kraft [1] defines the User Experience (UX) as the set of sensations that the end user feels while using a product. These sensations dominate the UX curve, generating spikes or dips depending on how positive or negative the feelings are. The UX therefore represents a likelihood index of the user's willingness to continue using a product. Ansuini [2] defines a legacy application (LA) as an information system of value inherited from the past. Nowadays, most companies make use of LAs, which on one side are characterized by high reliability and specialization, but on the other have an equally high degree of obsolescence, particularly with regard to the user interface. This paper introduces a framework for the UX reengineering of an LA using the Interactive Dialogue Model (IDM) methodology. The framework enables the definition and implementation of UX reengineering operations through a specific process characterized by a series of steps. The reengineering process can take place in an almost fully automatic way, although in some phases the designer is still guaranteed the possibility of intervening on the intermediate diagrams. Given the reiteration of the reengineering process, it is necessary to historicize the changes made by the designer and to apply them automatically, when necessary, without any manual intervention. To this end, the implemented editor generates some extra runtime ATL (ATLAS Transformation Language) rules in order to keep track of the customizations made on the model by the designer.
Citations: 3
A requirements description language pLSC for probabilistic branches and three-stage events
Jinyu Kai, Huai-kou Miao, Honghao Gao
The Live Sequence Chart (LSC) language, a multi-modal extension of MSC, introduces the distinction between mandatory and possible behavior at the level of the whole chart and of individual chart elements. While LSCs extend MSCs qualitatively, a deficiency emerges when it comes to capturing quantitative behavior. For probabilistic systems, i.e., systems that exhibit probabilistic aspects, probabilistic properties are considered the most important requirements and need to be captured quantitatively. To address this, we propose a requirements description language called pLSC. Supported by measure theory and probability theory, pLSC describes interactions quantitatively, suiting probabilistic systems along two dimensions: probabilistic branches and three-stage events. The paper introduces the graphical and textual presentation of pLSC.
Citations: 0
Survey of attribute based encryption
Zhi Qiao, Shuwen Liang, Spencer Davis, Hai Jiang
In Attribute-Based Encryption (ABE), a set of descriptive attributes is used as an identity to generate a secret key, as well as serving as the access structure that performs access control. ABE successfully integrates encryption and access control and is ideal for sharing secrets among groups, especially in a Cloud environment. Most developed ABE schemes support key-policy or ciphertext-policy access control, in addition to other features such as decentralized authority, efficient revocation, and key delegation. This paper surveys mainstream papers, analyzes the main features desired of ABE systems, and classifies them into different categories. With this high-level guidance, future researchers can treat these features as individual modules and select related ones to build ABE systems on demand.
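As a purely illustrative sketch of the access structures these schemes enforce, the Python snippet below evaluates an AND/OR/threshold policy tree against an attribute set. In a real CP-ABE or KP-ABE scheme this check is enforced cryptographically via secret sharing over the tree, not by an explicit if-test; the policy encoding here is invented for illustration.

```python
def satisfies(policy, attrs):
    """Evaluate an ABE-style access structure against a set of attributes.

    policy is either an attribute string, ('AND', [children]),
    ('OR', [children]), or ('THRESHOLD', k, [children])."""
    if isinstance(policy, str):
        return policy in attrs
    if policy[0] == 'AND':
        return all(satisfies(c, attrs) for c in policy[1])
    if policy[0] == 'OR':
        return any(satisfies(c, attrs) for c in policy[1])
    if policy[0] == 'THRESHOLD':
        _, k, children = policy
        return sum(satisfies(c, attrs) for c in children) >= k
    raise ValueError('unknown policy node')

# Ciphertext-policy example: ("doctor" AND "cardiology") OR "admin".
policy = ('OR', [('AND', ['doctor', 'cardiology']), 'admin'])
print(satisfies(policy, {'doctor', 'cardiology'}))  # True: could decrypt
print(satisfies(policy, {'doctor'}))                # False: access denied
```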
Citations: 24
An energy balanced topology construction protocol for Wireless Sensor Networks
Soumya Saha, L. McLauchlan
Energy efficiency in Wireless Sensor Networks (WSNs) is an important research area for practical WSN deployments over extended periods of time. Appropriate topology control (TC), consisting of topology construction and maintenance, facilitates an energy-aware and load-balanced network while ensuring connectivity and coverage. In this research, the authors propose a novel load-balanced TC protocol for connectivity: EAST (Energy Aware Spanning Tree). The EAST protocol's goal is to balance the load while reducing the network's energy utilization. EAST builds upon the Minimal Spanning Tree (MST) construction method: a Connected Dominating Set (CDS) is constructed, taking into account the energy level of each parent branch, to form the communication backbone. The aim is to place as many nodes as possible into sleep mode while maintaining a communication backbone of active nodes determined by an energy metric. Dynamic Global Topology Recreation (DGTRec) was implemented as the topology maintenance protocol, periodically building a new communication backbone. The proposed EAST algorithm was simulated and compared with Simple Tree, Random Nearest Neighbor Tree (Random NNT), and Euclidean Minimal Spanning Tree (Euclidean MST). The simulation results demonstrate that the EAST algorithm improves load balancing and event coverage compared to the other tested algorithms.
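The exact EAST edge metric is not given in the abstract; the sketch below shows one plausible reading, a Prim-style spanning-tree construction whose edge cost blends link distance with the parent's residual energy, so that low-energy nodes attract fewer children. The cost blend, the `alpha` parameter, and the toy topology are all assumptions.

```python
import heapq
import math

def east_like_tree(nodes, energy, alpha=0.5):
    """Prim-style backbone sketch over a complete graph.

    nodes: {id: (x, y)} positions; energy: {id: residual energy in [0, 1]}.
    Returns child -> parent links of the communication backbone."""
    def cost(parent, child):
        d = math.dist(nodes[parent], nodes[child])
        return alpha * d + (1 - alpha) * (1 - energy[parent])

    root = next(iter(nodes))
    in_tree, parent = {root}, {root: None}
    heap = [(cost(root, v), root, v) for v in nodes if v != root]
    heapq.heapify(heap)
    while len(in_tree) < len(nodes):
        c, u, v = heapq.heappop(heap)
        if v in in_tree:          # lazy deletion of stale heap entries
            continue
        parent[v] = u
        in_tree.add(v)
        for w in nodes:           # offer edges from the newly added node
            if w not in in_tree:
                heapq.heappush(heap, (cost(v, w), v, w))
    return parent

nodes = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (2, 1)}
energy = {0: 1.0, 1: 0.3, 2: 0.9, 3: 0.8}   # node 1 is nearly drained
print(east_like_tree(nodes, energy))
```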
Citations: 6
Microcredit risk assessment using crowdsourcing and social networks
Tofig Hasanov, Motoyuki Ozeki, N. Oka
The task of automated risk assessment is attracting significant attention in light of the recent growth in microloan popularity. The industry requires a real-time method for timely processing of the large number of applicants for short-term small loans; owing to the volume of applications, manual verification is not a viable option. In cooperation with a microloan company in Azerbaijan, we have researched automated risk assessment using crowdsourcing. The principal concept behind this approach is that a significant amount of information relating to a particular applicant can be retrieved from social networks. The suggested approach can be divided into three parts. First, applicant information is collected from social networks such as LinkedIn and Facebook; this occurs only with the applicant's permission. Then, this data is processed by a program that extracts the relevant information segments. Finally, these information segments are evaluated using crowdsourcing. To that end, we automatically posted requests on the social networks regarding certain information segments and evaluated the community response by counting "likes" and "shares". For example, we posted the status, "Do you think that a person who has worked at ABC Company is more likely to repay a loan? Please 'like' this post if you agree." From the results, we were able to estimate public opinion. Once evaluated, each information segment was given a weight factor that was optimized using loan-repayment test data provided to us by a company. We then tested the proposed system on a set of 400 applicants. Using a second crowdsourcing approach, we confirmed that the resulting solution provided a 92.5% correct assessment rate, with 6.45% false positives and 11.11% false negatives, and an assessment duration of 24 hours.
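As a hypothetical illustration of the final scoring step, where crowd-evaluated information segments are combined through weights fitted on repayment data, consider the sketch below. The segment names, weight values, and approval threshold are all invented for illustration; the paper does not specify them.

```python
# Weights assumed to have been fitted on historical loan-repayment outcomes.
weights = {
    'employer_reputation': 0.40,
    'education': 0.25,
    'network_endorsements': 0.35,
}

def risk_score(segment_scores):
    """Weighted sum of crowd-derived segment values, each in [0, 1]."""
    return sum(weights[s] * v for s, v in segment_scores.items())

applicant = {'employer_reputation': 0.8, 'education': 0.6,
             'network_endorsements': 0.7}
score = risk_score(applicant)
print('approve' if score >= 0.65 else 'review manually', round(score, 3))
```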
Citations: 3
Test image generation using segmental symbolic evaluation for unit testing
Tahir Jameel, Mengxiang Lin, He Li, Xiaomei Hou
This paper presents a novel technique to generate test images using segmental symbolic evaluation for testing image processing applications. Images are multidimensional and diverse in nature, which poses distinct challenges for the testing process; a technique is required to generate test images capable of exercising program paths determined by image pixel values. The proposed technique is based on symbolic execution, which has been used extensively for test data generation in recent years. In image processing applications, pixel operations such as averaging and convolution are applied to a segment of input image pixels, called a window, in a single iteration and repeated across the entire image. Our key idea is to imitate operations on the pixel window using symbolic values rather than concrete ones to generate path constraints in the program under test. The path constraints generated for different paths are solved for concrete values using our simple SAT solver, and the solutions can guide program execution down the specific paths. The solutions of the path constraints are used to generate synthetic test images for each identified path, and path constraints that are not solvable for concrete pixel values are reported as infeasible paths. We have developed a tool, IMSUIT, that takes an image processing function as input and executes the program symbolically for the given pixel window to generate test images. The effectiveness of IMSUIT was tested on different modules of an optical character recognition system; the results show that it can successfully create test images for each path of the program under test and is capable of identifying infeasible paths.
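The paper uses its own simple solver; as a stand-in, the sketch below uses the z3 SMT solver to play out the core idea on a 3x3 window: encode the pixels symbolically, add a branch condition from a hypothetical mean-thresholding path, and either extract a concrete test window or report the path infeasible. The window size, branch condition, and solver choice are assumptions.

```python
from z3 import Ints, Solver, Sum, sat

# Symbolic 3x3 pixel window.
px = Ints(' '.join('p%d' % i for i in range(9)))

s = Solver()
s.add(*[p >= 0 for p in px], *[p <= 255 for p in px])  # valid intensities

# Path condition from a hypothetical branch: `if mean(window) > 128: ...`
s.add(Sum(px) / 9 > 128)

if s.check() == sat:
    m = s.model()
    window = [m[p].as_long() for p in px]
    print('test window driving the true-branch:', window)
else:
    print('path infeasible for concrete pixel values')
```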
Citations: 2