Reuse is an important mechanism for improving the efficiency of software development. For Internet-scale software produced through service composition, reuse at the granularity of individual services is often inefficient due to the large number of available services. This paper proposes a novel architecture that enables efficient reuse of process fragments. In the proposed architecture, services are organized into a network, called the Service Composition Network (SCN), based on their co-occurrence in existing composite services. Reusable process fragments are extracted by decomposing existing composite services according to both the structural constraints of the process and the relevance of the services within the same fragment. The design principles and a prototype implementation of this architecture are presented, the performance of the proposed approach is analyzed, and an application is described to demonstrate its effectiveness.
{"title":"Business Process Decomposition Based on Service Relevance Mining","authors":"Zicheng Huang, J. Huai, Xudong Liu, Jiang Zhu","doi":"10.1109/WI-IAT.2010.21","DOIUrl":"https://doi.org/10.1109/WI-IAT.2010.21","url":null,"abstract":"Reuse is an important mechanism for improving the efficiency of software development. For Internet-scale software produced through service composition, the simple reuse granularity at service is often inefficient due to the large number of available services. This paper proposes a novel architecture which enables efficient reuse of process fragments. In the proposed architecture, services are organized into a network, called Service Composition Network (SCN), based on their co-occurence in the existing composite services. The reusable process fragments are extracted by decomposing existing composite services according to both the structural constraint of the process and the relevance of services in the same process fragment. The design principles and a prototype implementation of this architecture are presented, the performance of the proposed approach is analyzed, and an application is described to demonstrate the effectiveness of it.","PeriodicalId":340211,"journal":{"name":"2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133085085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The frequencies of binary adjacent word pairs (BAWPs) in a large corpus of native English speakers were counted to provide the foundational BAWP data for this research. BAWPs in Chinese college students' English compositions were then tagged with their frequencies in the native corpus. The researchers' examination found that about 46% of the BAWPs in students' compositions with a tagged frequency below 10 are language errors, and close to 37% of those with a tagged frequency below 30 are errors. Misreport patterns were summarized, and more than 100 misreport filter rules were constructed. Combined with these rules, the ratios of actual errors rise to over 60% and 48% for the two threshold values respectively, which can greatly facilitate college English writing.
{"title":"Automated Error Detection of Vocabulary Usage in College English Writing","authors":"Shili Ge, Rou Song","doi":"10.1109/WI-IAT.2010.47","DOIUrl":"https://doi.org/10.1109/WI-IAT.2010.47","url":null,"abstract":"The frequencies of binary adjacent word pairs (BAWPs) in large corpus of native English speakers were counted to retrieve the data of BAWPs as the foundation of the research. BAWPs in Chinese college students’ English compositions were tagged with the frequencies appearing in native corpus. Researchers’ examination finds that about 46% of the BAWPs in students’ compositions with the tagged frequency lower than 10 are language errors and close to 37% with the tagged frequency lower than 30 are errors. Misreport patterns were summarized and more than 100 filter rules of misreport were constructed. Combining with these rules, the ratios of actual errors are raised to over 60% and 48% for these two threshold values respectively, which can greatly facilitate college English writing.","PeriodicalId":340211,"journal":{"name":"2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133937153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Twitter has been used as one of the communication channels for spreading breaking news. We propose a method to collect, group, rank, and track breaking news in Twitter. Since the short length of messages makes similarity comparison difficult, we boost the scores of proper nouns to improve the grouping results. Each group is ranked based on popularity and reliability factors. The current detection method is limited to the factual parts of messages. We developed an application called "Hotstream" based on the proposed method. Users can discover breaking news from the Twitter timeline; each story is provided with the message originator, the story's development, and an activity chart. This provides a convenient way for people to follow breaking news and stay informed with real-time updates.
{"title":"Breaking News Detection and Tracking in Twitter","authors":"S. Phuvipadawat, T. Murata","doi":"10.1109/WI-IAT.2010.205","DOIUrl":"https://doi.org/10.1109/WI-IAT.2010.205","url":null,"abstract":"Twitter has been used as one of the communication channels for spreading breaking news. We propose a method to collect, group, rank and track breaking news in Twitter. Since short length messages make similarity comparison difficult, we boost scores on proper nouns to improve the grouping results. Each group is ranked based on popularity and reliability factors. Current detection method is limited to facts part of messages. We developed an application called “Hotstream” based on the proposed method. Users can discover breaking news from the Twitter timeline. Each story is provided with the information of message originator, story development and activity chart. This provides a convenient way for people to follow breaking news and stay informed with real-time updates.","PeriodicalId":340211,"journal":{"name":"2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology","volume":"182 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122506244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Auditing Information Systems Security is difficult, yet it is crucial for ensuring the daily operational activities of organizations as well as for promoting competition and creating new business opportunities. A conceptual security framework to manage and audit Information System Security is proposed and discussed. The proposed framework follows a conceptual-model approach based on the ISO/IEC JTC1 standards, to assist organizations in better managing their Information Systems Security.
{"title":"A Security Framework for Audit and Manage Information System Security","authors":"T. Pereira, H. Santos","doi":"10.1109/WI-IAT.2010.244","DOIUrl":"https://doi.org/10.1109/WI-IAT.2010.244","url":null,"abstract":"Auditing Information Systems Security is difficult and becomes crucial to ensure the daily operational activities of organizations as well as to promote competition and to create new business opportunities. A conceptual security framework to manage and audit Information System Security is proposed and discussed. The proposed framework is based on a conceptual model approach, based on the ISO/IEC_JCT1 standards, to assist organizations to better manage their In-formation Systems Security.","PeriodicalId":340211,"journal":{"name":"2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121277688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
General Game Playing (GGP) research aims at designing intelligent game-playing agents that, given the rules of any game, automatically learn strategies to play and win without human intervention. Our GGP agent can play the wide variety of heterogeneous games provided by the IJCAI GGP competition framework and, without human intervention, learn from its own history to develop strategies for achieving the game goals. It uses statistical analysis to identify important game features shared by the winners. To illustrate how the correct features are identified, we use game examples from different game categories, including Tic-Tac-Toe (a territory-taking game), Mini-Chess (a strategy game), and Connect Four (a larger-scale board game).
{"title":"Predictive Sub-goal Analysis in a General Game Playing Agent","authors":"Xinxin Sheng, D. Thuente","doi":"10.1109/WI-IAT.2010.225","DOIUrl":"https://doi.org/10.1109/WI-IAT.2010.225","url":null,"abstract":"General Game Playing (GGP) research aims at designing intelligent game playing agents that, given the rules of any game, automatically learn strategies to play and win without human intervention. Our GGP agent can play the wide variety of heterogeneous games provided by the IJCAI GGP competition framework, and without human intervention, learn from its own history to develop strategies toward achieving the game goals. It uses statistical analysis to identify important game features shared by the winners. To illustrate how the correct features are identified, we use game examples from different game categories, including Tic-Tac-Toe (territory taking game), Mini-Chess (strategy game), and Connect Four (board game on larger scale).","PeriodicalId":340211,"journal":{"name":"2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121303244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From the viewpoint that most news items report on entities (people, organizations, and locations), we propose a novel stakeholder model to represent and analyze news content and to explore special items whose descriptions are inconsistent. Using this model, we can discover differences among multimedia news items from the perspectives of media type (text, video, and audio) and description type (objective, subjective, and relationship descriptions). We propose a method that extracts stakeholders as the main participants (people, organizations, etc.) of the described news event and detects inconsistency, exploring the special items by comparing visual and textual descriptions of the exposure level of each stakeholder. A prototype system is implemented, and we show experimental results that validate the proposed methods.
{"title":"Exploring Special Items in Multimedia News Based on a Stakeholder Model","authors":"Ling Xu, Qiang Ma, Masatoshi Yoshikawa","doi":"10.1109/WI-IAT.2010.163","DOIUrl":"https://doi.org/10.1109/WI-IAT.2010.163","url":null,"abstract":"From the viewpoint that most news items report on entities (person, organization and location), we propose a novel stakeholder model to represent and analyze news contents to explore special items in which there is inconsistency in the descriptions. By using this model, we can discover differences in multimedia news items from the perspectives of media types (text, video and audio) and description types (objective, subjective and relationship descriptions). We propose a method of extracting stakeholders as main participants (people, organization, etc.) of the described news event and detect inconsistency to explore the special items by comparing visual and textual descriptions on the exposure level of each stakeholder. A prototype system is implemented and we also show some experimental results to validate the proposed methods.","PeriodicalId":340211,"journal":{"name":"2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121466702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Salvadores, Gianluca Correndo, M. Szomszor, Yang Yang, Nicholas Gibbins, Ian Millard, H. Glaser, N. Shadbolt
This paper describes an Open Linked Data backlinking service, a generic architectural component to support the discovery of useful links between items across highly connected data sets. Using Public Sector Information (PSI) currently available as Linked Data, we demonstrate that contemporary publishing practices do not adequately support the ability to navigate or automatically traverse between resources published by different vendors, or the capacity to discover information relevant to a particular URI. Although some useful services in this area have been developed, such as large triple indexes of published data and the collection of sameAs relationships between individuals, we believe that an important component is missing: a mechanism to discover the backlinks to relevant resources that cannot be found by direct URI resolution. We present the implementation of such a component, integrating data from various PSI sources.
{"title":"Domain-Specific Backlinking Services in the Web of Data","authors":"M. Salvadores, Gianluca Correndo, M. Szomszor, Yang Yang, Nicholas Gibbins, Ian Millard, H. Glaser, N. Shadbolt","doi":"10.1109/WI-IAT.2010.34","DOIUrl":"https://doi.org/10.1109/WI-IAT.2010.34","url":null,"abstract":"This paper describes an Open Linked Data backlinking service, a generic architecture component to support the discovery of useful links between items across highly connected data sets. Using Public Sector Information (PSI) currently available as Linked Data, we demonstrate that contemporary publishing practices do not adequately support the ability to navigate or automatically traverse between resources published by different vendors, or the capacity to discover information relevant to a particular URI. Although some useful services in this area have been developed, such as large triple indexes of published data, and the collection of same. As relationships between individuals, we believe that an important component is missing: a mechanism to discover the backlinks to relevant resources that cannot be found by direct URI resolution. We present the implementation of such a component, integrating data from various PSI sources.","PeriodicalId":340211,"journal":{"name":"2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116955551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zhenyu Liu, Tiejiang Liu, T. Lu, Lizhi Cai, Genxing Yang
This paper studies online quality measurement in a cloud computing environment. It analyzes the concentrated measurement and evaluation method of current software quality evaluation systems: existing evaluation technology applies concentrated evaluation before run-time, and quality is obtained by testing and analyzing the results. By analyzing the features of the cloud computing environment, we argue that service-oriented evaluation should instead be based mainly on online measurement at runtime. The paper puts forward an agent-based online measurement infrastructure that performs distributed assessment in a service computing environment. A quality model and a corresponding online data collection strategy are described. This approach replaces the previous concentrated, simulation-based quality evaluation with a novel measurement method for quality data acquisition based on distributed agent technology, so that the data obtained during online service operation makes the measurement results accurate and credible.
{"title":"Agent-Based Online Quality Measurement Approach in Cloud Computing Environment","authors":"Zhenyu Liu, Tiejiang Liu, T. Lu, Lizhi Cai, Genxing Yang","doi":"10.1109/WI-IAT.2010.213","DOIUrl":"https://doi.org/10.1109/WI-IAT.2010.213","url":null,"abstract":"This paper studies online quality measurement in cloud computing environment. The paper analyzes concentration measure evaluation method of the current software quality evaluation system. As is known to all, the existing evaluation technology uses concentrated evaluation before run-time. By testing and analyzing results, the quality is obtained. In this paper, by analyzing the features in cloud computing environment, we consider that service-oriented evaluation should be mainly based on runtime online measurement. The paper puts forward agent-based online measure infrastructure, which is evaluated by distributed assessment in service computing environment A quality model and corresponding online data collection strategy are described. In this approach, the previous quality evaluation, which is concentrated and simulating in a simulated environment, is substituted. Then, a novel measure method of quality data acquisition, which is based on distributed agent technology, is established. 
So, during online service operation, the obtained data can make measurement results to be accurate and credible.","PeriodicalId":340211,"journal":{"name":"2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115431116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The significant increase in data collected for investigative tasks and the increased complexity of the reasoning process itself have made investigative analytical tasks more challenging. These tasks are time-critical and typically involve identifying and tracking multiple hypotheses, gathering evidence to validate the correct hypotheses, and eliminating the incorrect ones. In this paper we specifically address predictive tasks concerned with forecasting future trends. We describe RESIN, an AI blackboard-based agent that leverages interactive visualizations and mixed-initiative problem solving to enable analysts to explore and pre-process large amounts of data in order to perform predictive analytics. Our empirical evaluation discusses the advantages and challenges of predictive analytics in a complex domain like intelligence analysis.
{"title":"Predictive Analytics Using a Blackboard-Based Reasoning Agent","authors":"Jia Yue, A. Raja, W. Ribarsky","doi":"10.1109/WI-IAT.2010.155","DOIUrl":"https://doi.org/10.1109/WI-IAT.2010.155","url":null,"abstract":"Significant increase in collected data for investigative tasks and the increased complexity of the reasoning process itself have made investigative analytical tasks more challenging. These tasks are time critical and typically involve identifying and tracking multiple hypotheses; gathering evidence to validate the correct hypotheses and eliminating the incorrect ones. In this paper we specifically address predictive tasks that are concerned with predicting future trends. We describe RESIN, an AI blackboard-based agent that leverages interactive visualizations and mixed-initiative problem solving to enable analysts to explore and pre-process large amounts of data in order to perform predictive analytics. Our empirical evaluation discusses the advantages and challenges of predictive analytics in a complex domain like intelligence analysis.","PeriodicalId":340211,"journal":{"name":"2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115650624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As the meta-synthesis approach is proposed to deal with complex system problems, the problem-solving process in the meta-synthesis workshop is important. This paper discusses the working process of the meta-synthesis workshop, which can be expressed as a three-dimensional process consisting of problem solving, expert collaboration, and knowledge discovery (PS-EC-KD). We then propose a framework for a meta-synthesis workshop supporting this process. Finally, two practical cases are discussed: the synthesis process for macro-economy forecasting in a meta-synthesis workshop, and knowledge-system building for Food Price Forecast and Policy-making.
{"title":"Problem Solving Framework and Case Study in Meta-synthesis Workshop","authors":"Xiaoji Zhou, Jingyuan Yu","doi":"10.1109/WI-IAT.2010.262","DOIUrl":"https://doi.org/10.1109/WI-IAT.2010.262","url":null,"abstract":"As meta-synthesis approach is proposed to deal with complex system problem, the problem solving process under the meta-synthesis workshop is of importance. This paper discusses the working process of the meta-synthesis workshop which could be expressed as a three-dimension process consists of problem solving, expert collaboration and knowledge discovery (PS-EC-KD). Then, we propose a framework of meta-synthesis workshop supporting this process. Two practical cases, synthesis process for macro-economy forecast in meta-synthesis workshop, and knowledge system building for Food Price Forecast and Policy-making, are discussed finally.","PeriodicalId":340211,"journal":{"name":"2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114896855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}