
Latest publications from the 2012 Ninth Web Information Systems and Applications Conference

Dual-Kad: Kademlia-Based Query Processing Strategies for P2P Data Integration
Pub Date : 2012-11-16 DOI: 10.1109/WISA.2012.31
Zongquan Wang, Guoqing Dong, Jie Zhu
The P2P data integration system aims to combine the advantages of P2P technologies and data integration to overcome the shortcomings of centralized data integration systems. Kademlia, a widely used and efficient network protocol for P2P file-sharing systems, has a very clear logical structure; with its unique node-identification scheme and its XOR distance metric, it can locate the node closest to a given key in O(log n) lookup steps. In this paper, we put forward a method of applying Kademlia to the P2P data integration system and propose a new P2P data integration model, Dual-Kad, combining a Kademlia network over the Peer layer with one over the Super-Peer layer. Dual-Kad can process queries based on semantic logic (a limitation of the original Kademlia), shorten the query routing path, and cache query results, thereby speeding up query routing as a whole. We describe the detailed structure of Dual-Kad and its query routing algorithms. The case studies presented in this paper show that our query routing strategies are effective.
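The XOR metric the abstract refers to can be illustrated in a few lines. This is a toy sketch, not the paper's Dual-Kad: node IDs are small integers, and the linear scan below stands in for Kademlia's k-bucket routing, which achieves the O(log n) lookup.

```python
# Toy illustration of Kademlia's XOR distance metric: the "distance"
# between two node IDs is simply their bitwise XOR, which is symmetric
# and unidirectional (exactly one node is closest to any key).

def xor_distance(a: int, b: int) -> int:
    """Kademlia distance between two node IDs."""
    return a ^ b

def closest_node(nodes, key: int) -> int:
    """Return the node ID closest to `key` under the XOR metric.

    Real Kademlia finds this via k-bucket routing in O(log n) steps;
    a linear scan is enough to show the metric itself.
    """
    return min(nodes, key=lambda n: xor_distance(n, key))

nodes = [0b0001, 0b0100, 0b1011, 0b1110]
print(closest_node(nodes, 0b1010))  # prints 11 (0b1011, XOR distance 1)
```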
Citations: 0
A Multilayer Method of Schema Matching Based on Semantic and Functional Dependencies
Pub Date : 2012-11-16 DOI: 10.1109/WISA.2012.9
Chen Zhao, Derong Shen, Yue Kou, Tiezheng Nie, Ge Yu
Determining matching schemas enables queries over heterogeneous data spaces to be formulated and facilitates data integration. Current schema matching techniques mostly focus on mining mappings from the elements' own information. This paper proposes introducing semantic and functional dependencies into the matching process to achieve multilayer schema matching results. It calculates semantic similarity with the help of WordNet and generates candidate mapping sets. By introducing functional dependencies to formalize structural information, it obtains structural similarities between element pairs. A probabilistic factor is considered when selecting mapping pairs. Experimental evaluation on real data verifies the superiority of our method.
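The candidate-mapping step can be sketched roughly as follows. The paper computes semantic similarity with WordNet; here a small hand-made synonym table (`SYNONYMS`, an assumption of this sketch) stands in for it, and token-wise best-match averaging handles compound element names.

```python
# Hypothetical stand-in for WordNet-based similarity: known synonym pairs
# score 0.8, exact matches 1.0, everything else 0.
SYNONYMS = {
    frozenset({"employee", "staff"}),
    frozenset({"salary", "wage"}),
}

def token_sim(a: str, b: str) -> float:
    if a == b:
        return 1.0
    if frozenset({a, b}) in SYNONYMS:
        return 0.8
    return 0.0

def name_similarity(x: str, y: str) -> float:
    """Average best-match similarity between the tokens of two element names."""
    xs, ys = x.lower().split("_"), y.lower().split("_")
    scores = [max(token_sim(t, u) for u in ys) for t in xs]
    return sum(scores) / len(scores)

def candidate_mappings(schema_a, schema_b, threshold=0.5):
    """Keep element pairs whose semantic similarity clears the threshold."""
    return [(x, y) for x in schema_a for y in schema_b
            if name_similarity(x, y) >= threshold]

print(candidate_mappings(["employee_name", "salary"],
                         ["staff_name", "wage", "dept"]))
# prints [('employee_name', 'staff_name'), ('salary', 'wage')]
```

In the paper, the candidate set produced this way is then refined with structural similarity derived from functional dependencies.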
Citations: 4
Investigations on XML-based Data Exchange between Heterogeneous Databases
Pub Date : 2012-11-16 DOI: 10.1109/WISA.2012.44
Mingli Wu, Yebai Li
With the growth of the Internet, many heterogeneous relational databases are being built in distributed environments. Data exchange between these databases now attracts more attention from researchers and engineers than ever. As a well-formed markup language, XML is well suited to storing and transferring information, so in this paper we investigate a data exchange method based on XML. We analyze mapping techniques between XML schemas and relational databases, then describe an effective method for data exchange in detail. Finally, we design and implement a data exchange system using Java and DOM interface technology; it works well in a real commercial web application.
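The paper implements its exchange system in Java with the DOM interface; the core idea, serializing relational rows to XML on one side and parsing them back on the other, can be sketched in a few lines with Python's ElementTree.

```python
import xml.etree.ElementTree as ET

def rows_to_xml(table: str, rows: list[dict]) -> str:
    """Serialize relational rows as <table><row><col>val</col>...</row>...</table>."""
    root = ET.Element(table)
    for r in rows:
        row_el = ET.SubElement(root, "row")
        for col, val in r.items():
            ET.SubElement(row_el, col).text = str(val)
    return ET.tostring(root, encoding="unicode")

def xml_to_rows(xml: str) -> list[dict]:
    """Parse the XML back into a list of column->value dicts."""
    root = ET.fromstring(xml)
    return [{c.tag: c.text for c in row} for row in root]

rows = [{"id": "1", "name": "Ada"}, {"id": "2", "name": "Lin"}]
assert xml_to_rows(rows_to_xml("users", rows)) == rows  # lossless round-trip
```

A real system would also carry schema information (types, keys) so the receiving database can recreate constraints, which is where the mapping techniques the abstract mentions come in.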
Citations: 8
Distributed and Collaborative Requirements Elicitation Based on Social Intelligence
Pub Date : 2012-11-16 DOI: 10.1109/WISA.2012.14
Bin Wen, Ziqiang Luo, Peng Liang
Requirements are the formal expression of users' needs, and requirements elicitation is the activity of collecting them. Traditional acquisition methods, such as interviews, observation, and prototyping, are ill-suited to service-oriented software development, which features distributed stakeholders, collective intelligence, and behavioral emergence. In this paper, a collaborative requirements elicitation approach based on social intelligence is put forward for networked software, and the requirements-semantics concept is defined as the formal requirements description generated by collective participation. Furthermore, semantic wiki technology is chosen as the requirements authoring platform to accommodate its distributed and collaborative features. Facing the wide-area distributed Internet, the approach combines Web 2.0 and the Semantic Web to revise the experts' requirements-semantics model through social classification. At the same time, the requirements model is instantiated with semantic tagging and validation. Beyond the traditional documentary specification, requirements-semantics artifacts are exported from the elicitation process to the subsequent software production process, i.e., service aggregation and service resource customization. An experiment and a prototype have proved the feasibility and effectiveness of the proposed approach.
Citations: 9
Applied Research of PSO in Parameter Estimation of Richards Model
Pub Date : 2012-11-16 DOI: 10.1109/WISA.2012.29
Ting-fa Wu, Jun-Bin You, Meijuan Yan, Hao-jun Sun
Establishing a mathematical model of epidemic spread is significant for controlling an epidemic situation and minimizing its impact. In this paper, the Richards model is proposed to fit the spread, and PSO is employed to estimate the model's parameters. A concave-function decreasing strategy and a linear decreasing strategy are adopted to update the particles' velocity inertia weights, and a new objective function based on normalized cross-correlation is built. The experimental results indicate that PSO is a valid method for parameter estimation of the Richards model.
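A minimal version of this setup can be sketched as follows, under two assumptions: the Richards model is taken in one common parameterization, y(t) = K / (1 + exp(-r(t - t0)))^(1/nu), and the objective is plain squared error rather than the paper's normalized cross-correlation criterion. The inertia weight decreases linearly, one of the two strategies the abstract mentions.

```python
import math, random

def richards(t, K, r, t0, nu):
    # One common Richards-curve parameterization (an assumption here).
    return K / (1.0 + math.exp(-r * (t - t0))) ** (1.0 / nu)

def sse(params, data):
    return sum((richards(t, *params) - y) ** 2 for t, y in data)

def pso_fit(data, bounds, n_particles=40, n_iter=300, seed=1):
    """Basic global-best PSO with linearly decreasing inertia weight."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [sse(p, data) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for it in range(n_iter):
        w = 0.9 - (0.9 - 0.4) * it / n_iter   # linear inertia decrease
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 2.0 * rng.random() * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            f = sse(pos[i], data)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Synthetic "epidemic" curve with known parameters, then recover them.
true_params = (1000.0, 0.3, 25.0, 1.0)          # K, r, t0, nu
data = [(t, richards(t, *true_params)) for t in range(0, 60, 2)]
bounds = [(100, 2000), (0.01, 1.0), (0.0, 60.0), (0.1, 5.0)]
params, err = pso_fit(data, bounds)
```

Swapping `sse` for a normalized cross-correlation objective, as the paper does, only changes the fitness function; the swarm update is unchanged.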
Citations: 0
Build the Image File Catalog System Based on the Subdivision of Part-Whole Ontology
Pub Date : 2012-11-16 DOI: 10.1109/WISA.2012.46
Jifeng Cui, Yong Zhang, Chunxiao Xing
To manage massive remote image files, we build a file catalog system for data applications based on a part-whole ontology of spatial relations. The method analyzes the attribute items of image metadata and calculates their weights for the application, builds the concept-level relations of the catalog, computes the similarity between image attribute items and catalog nodes to construct the catalog system, and stores each file into the corresponding catalog directory. We design and realize the catalog system, and experiments show that this data integration method based on the subdivision of a part-whole ontology is effective for highly efficient integrated management of image data.
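The assignment step can be sketched roughly as follows. Everything concrete here is an assumption for illustration: the attribute names, the per-attribute weights (which the paper derives from application usage), and the simple equality-match scoring.

```python
# Hypothetical attribute weights; the paper calculates these from the
# application, not by hand.
WEIGHTS = {"region": 0.5, "sensor": 0.3, "band": 0.2}

def node_score(meta: dict, node_keywords: dict) -> float:
    """Weighted score of metadata attributes matching a catalog node."""
    return sum(w for attr, w in WEIGHTS.items()
               if meta.get(attr) == node_keywords.get(attr))

def assign(meta: dict, catalog: dict) -> str:
    """Return the catalog directory whose keywords best match the metadata."""
    return max(catalog, key=lambda path: node_score(meta, catalog[path]))

catalog = {
    "/asia/optical": {"region": "asia", "sensor": "optical"},
    "/asia/radar":   {"region": "asia", "sensor": "radar"},
    "/europe/radar": {"region": "europe", "sensor": "radar"},
}
meta = {"region": "asia", "sensor": "radar", "band": "X"}
print(assign(meta, catalog))  # prints /asia/radar
```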
Citations: 1
A Data-Centric Storage Approach for Efficient Query of Large-Scale Smart Grid
Pub Date : 2012-11-16 DOI: 10.1109/WISA.2012.27
Yan Wang, Qingxu Deng, W. Liu, Baoyan Song
Smart Grid is an important application of the Internet of Things (IoT). Monitoring data in a large-scale smart grid are massive, real-time, and dynamic, collected by many sensors, Intelligent Electronic Devices (IEDs), and other equipment. Consequently, traditional centralized storage proposals are not applicable to data storage in a large-scale smart grid. We therefore propose a data-centric storage approach supporting monitoring systems in large-scale smart grids: the Hierarchical Extended Storage Mechanism for Massive Dynamic Data (HES). HES stores monitoring data in different areas according to data type. It can add storage nodes dynamically through a coding method with an extended hash function, avoiding data loss during incidents and frequent events. Monitoring data are stored dispersedly across the nodes of the same layer by means of multi-threshold levels, which avoids load skew. Simulation results show that HES satisfies the needs of massive dynamic data storage and achieves load balance and a longer life cycle for the monitoring network.
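The abstract does not spell out HES's "extended hash" coding scheme, but a standard consistent-hash ring illustrates the property it targets: storage nodes can be added dynamically while most keys keep their placement.

```python
import bisect, hashlib

def h(s: str) -> int:
    """Map a string to a point on the hash ring."""
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class Ring:
    """Consistent-hash ring: data keys go to the next node clockwise."""
    def __init__(self):
        self._keys, self._nodes = [], {}

    def add_node(self, node: str):
        k = h(node)
        bisect.insort(self._keys, k)
        self._nodes[k] = node

    def locate(self, data_key: str) -> str:
        k = h(data_key)
        i = bisect.bisect(self._keys, k) % len(self._keys)
        return self._nodes[self._keys[i]]

ring = Ring()
for n in ("node-a", "node-b", "node-c"):
    ring.add_node(n)
before = {k: ring.locate(k) for k in ("m1", "m2", "m3", "m4")}
ring.add_node("node-d")            # dynamic node addition
after = {k: ring.locate(k) for k in before}
moved = sum(before[k] != after[k] for k in before)  # only a fraction remap
```

With naive modulo hashing, adding a node would remap almost every key; on the ring, only keys falling between the new node and its predecessor move, which is the behavior HES needs to avoid losing bursty event data during expansion.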
Citations: 9
A Novel URL Assignment Model Based on Multi-objective Decision Making Method
Pub Date : 2012-11-16 DOI: 10.1109/WISA.2012.19
Qiuyan Huang, Qingzhong Li, Zhongmin Yan
With the tremendous growth of the Web, it has become a huge challenge for single-process crawlers to locate resources that are precise and relevant to given topics in a reasonable amount of time, so parallel crawlers are increasingly important. However, parallelism raises a thorny problem: how to distribute URLs among the crawlers so that the parallel system works in coordination and the fetched Web pages are of high quality. In this paper, a novel URL assignment model for parallel crawlers is described; it is based on a multi-objective decision making method and synthesizes multiple factors such as load balance and overlap. Extensive experiments test and validate our techniques.
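The abstract does not give the decision model's exact form; the sketch below combines two of the factors it names with an assumed weighted sum: load balance (prefer the crawler with the shortest queue) and overlap (prefer the crawler already responsible for the URL's host, so pages are not fetched twice).

```python
from urllib.parse import urlparse

def assign_url(url: str, queues: dict, host_owner: dict,
               w_load=0.5, w_overlap=0.5) -> str:
    """Assign `url` to the crawler maximizing a weighted two-objective score.

    queues:     crawler name -> list of queued URLs
    host_owner: host -> crawler that first received this host
    """
    host = urlparse(url).netloc
    max_q = max(len(q) for q in queues.values()) or 1
    best, best_score = None, -1.0
    for crawler, q in queues.items():
        load = 1.0 - len(q) / max_q                      # 1.0 = emptiest queue
        overlap = 1.0 if host_owner.get(host) == crawler else 0.0
        score = w_load * load + w_overlap * overlap
        if score > best_score:
            best, best_score = crawler, score
    host_owner.setdefault(host, best)
    queues[best].append(url)
    return best

queues = {"c1": [], "c2": ["u1", "u2", "u3", "u4"]}
owner = {}
print(assign_url("http://a.com/page1", queues, owner))  # c1: shortest queue
print(assign_url("http://a.com/page2", queues, owner))  # c1: host affinity wins
```

Tuning `w_load` against `w_overlap` is exactly the kind of trade-off a multi-objective decision method resolves more carefully than a fixed weighted sum.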
Citations: 0
Modeling of Parallel Interactive Modes among Collaborative Processes Based on High Level Petri Nets
Pub Date : 2012-11-16 DOI: 10.1109/WISA.2012.49
Qianqian Xia, Jiantao Zhou, C. Sun
With the development of collaborative applications, parallel interactions among processes are becoming more complicated and more frequent, yet modeling the interactions among many collaborative processes is a complicated and error-prone procedure. In this paper, we first propose a novel Petri-net-based model, called PIPN, suitable for defining and analyzing the parallel interactions among collaborative processes. Second, seven parallel interactive modes are summarized along three dimensions of parallel interaction: unidirectional or bidirectional, single-point or multi-point, and synchronous or asynchronous. The formal definitions and control flow graphs of these modes are then given. Finally, an example, a micro-blog, is modeled to verify the reasonableness and feasibility of this work.
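The token-game semantics underlying such models can be shown with an ordinary place/transition net (a simplification of the high-level nets the paper uses): a transition is enabled when every input place holds a token, and firing moves tokens from inputs to outputs. Two processes sharing one transition gives a synchronous interaction.

```python
class PetriNet:
    """Minimal ordinary place/transition net (arc weights all 1)."""
    def __init__(self, marking: dict):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name) -> bool:
        ins, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in ins)

    def fire(self, name):
        ins, outs = self.transitions[name]
        if not self.enabled(name):
            raise RuntimeError(f"{name} is not enabled")
        for p in ins:
            self.marking[p] -= 1
        for p in outs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Two processes synchronizing on a single shared transition.
net = PetriNet({"p_ready": 1, "q_ready": 1})
net.add_transition("sync", inputs=["p_ready", "q_ready"], outputs=["done"])
net.fire("sync")
print(net.marking)  # prints {'p_ready': 0, 'q_ready': 0, 'done': 1}
```

An asynchronous interaction would instead be two transitions with a buffer place between them, so the sender can fire before the receiver is ready.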
Citations: 0
Detection Splog Algorithm Based on Features Relation Tree
Pub Date : 2012-11-16 DOI: 10.1109/WISA.2012.39
Yong-gong Ren, Xue Yang, Ming-fei Yin
The blogosphere has become a hot research field in recent years. As existing detection algorithms suffer from inefficient feature selection and weak correlation, we propose a splog detection algorithm based on a features relation tree. We construct the tree according to the correlations among features, reserving strongly relevant features and removing weak ones, then prune redundant and irrelevant features using a secondary feature selection method to retain the best feature subset. Experimental results on the Libsvm platform show that the algorithm based on the features relation tree achieves higher precision and coverage than traditional ones. Its precision in simulated training remains at about 90%, indicating good generalization ability.
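The tree construction itself is not specified in the abstract, but the filtering idea it relies on can be sketched: keep features strongly correlated with the label, and prune a feature as redundant when it is nearly a copy of one already kept. The thresholds and toy feature names below are assumptions for illustration.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy) if vx and vy else 0.0

def select_features(columns: dict, labels, relevance=0.3, redundancy=0.95):
    kept = []
    # Consider the most label-relevant features first.
    ranked = sorted(columns, key=lambda f: -abs(pearson(columns[f], labels)))
    for f in ranked:
        if abs(pearson(columns[f], labels)) < relevance:
            continue                      # weak relevance: drop
        if any(abs(pearson(columns[f], columns[g])) > redundancy for g in kept):
            continue                      # redundant with a kept feature
        kept.append(f)
    return kept

columns = {
    "link_ratio":  [0.90, 0.80, 0.10, 0.20],
    "link_ratio2": [0.91, 0.79, 0.12, 0.20],  # near-duplicate feature
    "post_hour":   [3, 21, 15, 9],            # unrelated to the label here
}
labels = [1, 1, 0, 0]                         # 1 = splog, 0 = legitimate
print(select_features(columns, labels))  # prints ['link_ratio']
```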
Citations: 1