Security at major locations of economic or political importance, transportation hubs, and other infrastructure is a key concern around the world, particularly given the threat of terrorism. Limited security resources prevent full security coverage at all times; instead, these limited resources must be deployed intelligently, taking into account differences in the priorities of targets requiring security coverage, the responses of adversaries to the security posture, and potential uncertainty over the types of adversaries faced. Game theory is well-suited to adversarial reasoning for security resource allocation and scheduling problems. Casting the problem as a Bayesian Stackelberg game, we have developed new algorithms for efficiently solving such games to provide randomized patrolling or inspection strategies: we can thus avoid predictability and address scale-up in these security scheduling problems, addressing key weaknesses of human scheduling. Our algorithms are now deployed in multiple applications. ARMOR, our first game-theoretic application, has been deployed at Los Angeles International Airport (LAX) since 2007 to randomize checkpoints on the roadways entering the airport and canine patrol routes within the airport terminals. IRIS, our second application, is a game-theoretic scheduler for randomized deployment of the Federal Air Marshals (FAMS) requiring significant scale-up in the underlying algorithms; IRIS has been in use since 2009. Similarly, a new set of algorithms is deployed in Boston in a system called PROTECT for randomizing US Coast Guard patrols; PROTECT is intended to be deployed at more locations in the future, and GUARDS is under evaluation for national deployment by the Transportation Security Administration (TSA).
These applications are leading to real-world use-inspired research in scaling up to large-scale problems, handling significant adversarial uncertainty, dealing with bounded rationality of human adversaries, and other fundamental challenges. This talk will outline our algorithms, key research results and lessons learned from these applications.
"Game Theory for Security: Lessons Learned from Deployed Applications," Milind Tambe. In: 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology. DOI: 10.1109/WI-IAT.2010.306.
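The core Stackelberg idea (the defender commits to a randomized coverage strategy, which the attacker observes and best-responds to) can be sketched on a toy two-target game. The payoff numbers below are invented for illustration; the deployed systems solve far larger Bayesian games with specialized mixed-integer programming, not the grid search used here.

```python
def best_response(attacker_payoffs, coverage):
    # The attacker observes the coverage and attacks the target with the
    # highest expected payoff; payoffs are (covered, uncovered) pairs.
    def expected(t):
        covered, uncovered = attacker_payoffs[t]
        return coverage[t] * covered + (1 - coverage[t]) * uncovered
    return max(attacker_payoffs, key=expected)

def solve_stackelberg(defender_payoffs, attacker_payoffs, steps=100):
    # Grid-search the defender's commitment: cover target 'A' with
    # probability p and target 'B' with 1 - p (two targets, one resource).
    best_p, best_val = 0.0, float('-inf')
    for i in range(steps + 1):
        p = i / steps
        coverage = {'A': p, 'B': 1 - p}
        target = best_response(attacker_payoffs, coverage)
        covered, uncovered = defender_payoffs[target]
        val = coverage[target] * covered + (1 - coverage[target]) * uncovered
        if val > best_val:
            best_p, best_val = p, val
    return best_p, best_val
```

The randomized commitment is the point: the defender's optimal strategy is a probability of coverage, not a fixed schedule, which is exactly what removes the predictability of human scheduling.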
This paper deals with large-scale information retrieval, aiming to contribute to web searching. The document collections considered are huge and not obvious to tackle with classical approaches: the larger the collection, the more powerful the approach required. A Bees Swarm Optimization algorithm called BSO-IR is designed to explore the prohibitive number of documents to find the information needed by the user. Extensive experiments were performed on the CACM and RCV1 collections and on larger corpora in order to show the benefit gained from using such an approach instead of classical ones. Performance in terms of solution quality and runtime is compared between BSO-IR and exact algorithms. Numerical results exhibit the superiority of BSO-IR over previous works in terms of scalability while yielding comparable quality.
"Bees Swarm Optimization Based Approach for Web Information Retrieval," H. Drias, Hadia Mosteghanemi. In: 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology. DOI: 10.1109/WI-IAT.2010.179.
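As a rough illustration of how a swarm-style search can rank documents without scoring the entire collection, here is a minimal sketch in the spirit of BSO (not the paper's actual BSO-IR algorithm): a reference solution spawns a search region, bees probe perturbations of it, and the best probe seeds the next round. The corpus representation and fitness function are invented for illustration.

```python
import random

def fitness(doc, query):
    # Toy relevance score: weighted term overlap between document and query.
    return sum(doc.get(term, 0) * w for term, w in query.items())

def bso_search(docs, query, n_bees=10, n_iters=50, seed=0):
    # A reference solution (a document index) spawns a search region; each
    # bee probes a perturbation of it; the best probe becomes the next
    # reference, and the best solution seen so far is retained.
    rng = random.Random(seed)
    ref = rng.randrange(len(docs))
    best = ref
    for _ in range(n_iters):
        region = {min(len(docs) - 1, max(0, ref + rng.randint(-1, 1) * (b + 1)))
                  for b in range(n_bees)}
        local_best = max(region, key=lambda i: fitness(docs[i], query))
        if fitness(docs[local_best], query) > fitness(docs[best], query):
            best = local_best
        ref = local_best
    return best
```

The appeal for large collections is that each iteration evaluates only a bounded region rather than every document, trading exhaustiveness for scalability, which matches the paper's comparison against exact algorithms.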
The massive size of Wikipedia and the ease with which its content can be created and edited have made Wikipedia an interesting domain for a variety of classification tasks, including topic detection, spam detection, and vandalism detection. These tasks are typically cast as a link-based classification problem, in which the class label of an article or a user is determined from its content-based and link-based features. Prior works have focused primarily on classifying either the editors or the articles (but not both). Yet there are many situations in which the classification can be aided by knowing collectively the class labels of the users and articles (e.g., spammers are more likely to post spam content than non-spammers). This paper presents a novel framework to jointly classify Wikipedia articles and editors, assuming there are correspondences between their classes. Our experimental results demonstrate that the proposed co-classification algorithm outperforms classifiers that are trained independently to predict the class labels of articles and editors.
"A Framework for Co-classification of Articles and Users in Wikipedia," Lei Liu, P. Tan. In: 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology. DOI: 10.1109/WI-IAT.2010.223.
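The intuition that article labels and editor labels reinforce each other can be illustrated with a simple joint score-propagation sketch over the edit graph. This is an illustrative stand-in of our own, not the paper's actual framework; all names and priors below are invented.

```python
def co_classify(article_prior, editor_prior, edits, alpha=0.5, n_iters=20):
    # Joint score propagation: an article's spam score mixes its own prior
    # with its editors' scores, and an editor's score mixes their prior with
    # the scores of the articles they edited ('edits': editor -> articles).
    a_score, e_score = dict(article_prior), dict(editor_prior)
    for _ in range(n_iters):
        new_a = {}
        for a in a_score:
            eds = [e for e, arts in edits.items() if a in arts]
            nbr = sum(e_score[e] for e in eds) / len(eds) if eds else 0.0
            new_a[a] = alpha * article_prior[a] + (1 - alpha) * nbr
        new_e = {}
        for e, arts in edits.items():
            nbr = sum(a_score[a] for a in arts) / len(arts) if arts else 0.0
            new_e[e] = alpha * editor_prior[e] + (1 - alpha) * nbr
        a_score, e_score = new_a, new_e
    return a_score, e_score
```

An article touched by a suspected spammer inherits part of that suspicion, and vice versa, which is precisely the coupling that independently trained classifiers cannot exploit.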
This paper presents a straightforward, extensive, and noise-resistant method for efficiently tagging a web query, submitted to a search engine, with proper category labels. These labels are intended to represent the closest categories related to the query, which can ultimately be used to enhance the results of any typical search engine by either restricting the results to matching categories or enriching the query itself. The presented method effectively rules out noise words within a query, forms optimal keyword packs using a density function, and returns a set of category labels representing the common topics of the given query using the Wikipedia category hierarchy.
"An Efficient Method for Tagging a Query with Category Labels Using Wikipedia towards Enhancing Search Engine Results," M. Alemzadeh, F. Karray. In: 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology. DOI: 10.1109/WI-IAT.2010.267.
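A hypothetical sketch of such a pipeline follows. All names are invented, and the fixed-size pack grouping is a crude stand-in for the paper's (unspecified here) density function: drop noise words, group the surviving keywords into packs, and look each pack up in a category index built from the Wikipedia hierarchy.

```python
def tag_query(query, stopwords, category_index, pack_size=2):
    # (1) rule out noise words, (2) group surviving keywords into packs of
    # adjacent terms (a crude stand-in for the paper's density function),
    # (3) collect category labels for each pack from a Wikipedia-style index.
    terms = [t for t in query.lower().split() if t not in stopwords]
    packs = [terms[i:i + pack_size] for i in range(0, len(terms), pack_size)]
    labels = set()
    for pack in packs:
        labels.update(category_index.get(' '.join(pack), []))
    return labels
```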
We propose Multi-HDCS, a new hybrid approach for solving Distributed CSPs with complex local problems. In Multi-HDCS, each agent concurrently: (i) runs a centralised systematic search for its complex local problem; (ii) participates in a distributed local search; (iii) contributes to a distributed systematic search. A centralised systematic search algorithm runs on each agent, finding all non-interchangeable solutions to the agent's complex local problem. In order to find a solution to the overall problem, two distributed algorithms which only consider the local solutions found by the centralised systematic searches are run: a local search algorithm identifies the parts of the problem which are most difficult to satisfy, and this information is used to find good dynamic variable orderings for a systematic search. We present two implementations of our approach which differ in the strategy used for local search: breakout and penalties on values. Results from an extensive empirical evaluation indicate that these two Multi-HDCS implementations are competitive against existing distributed local and systematic search techniques on both solvable and unsolvable distributed CSPs with complex local problems.
"Multi-HDCS: Solving DisCSPs with Complex Local Problems Cooperatively," David Lee, I. Arana, Hatem Ahriz, Kit-Ying Hui. In: 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology. DOI: 10.1109/WI-IAT.2010.141.
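The breakout strategy named as one of the two local-search variants can be illustrated in isolation: violated constraints accumulate penalty weights, so a local minimum eventually stops being one. This is a generic single-agent sketch of breakout, not Multi-HDCS itself; the CSP encoding below is our own.

```python
import random

def breakout_search(variables, domains, constraints, max_steps=1000, seed=0):
    # Breakout-style local search: constraints are predicates over a full
    # assignment; violated constraints accumulate penalty weights.
    rng = random.Random(seed)
    assign = {v: rng.choice(domains[v]) for v in variables}
    weight = [1] * len(constraints)

    def cost(a):
        return sum(weight[i] for i, c in enumerate(constraints) if not c(a))

    for _ in range(max_steps):
        current = cost(assign)
        if current == 0:
            return assign
        # Best single-variable change under the weighted violation count.
        best = None
        for v in variables:
            for d in domains[v]:
                if d != assign[v]:
                    trial = dict(assign, **{v: d})
                    c = cost(trial)
                    if best is None or c < best[0]:
                        best = (c, v, d)
        if best and best[0] < current:
            assign[best[1]] = best[2]
        else:  # local minimum: raise weights of currently violated constraints
            for i, c in enumerate(constraints):
                if not c(assign):
                    weight[i] += 1
    return None
```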
This paper is concerned with solving decentralized resource allocation and scheduling problems via auctions with negotiable agents: by allowing agents to switch their bid generation strategies within the auction process, better system-wide performance is achieved on average compared to a conventional Walrasian auction run with agents of fixed bid generation strategy. We propose a negotiation mechanism embedded in the auctioneer to solicit bidders' changes of strategy during the auction. Finally, we benchmark our approach against conventional auctions on a real-time, large-scale dynamic resource coordination problem to demonstrate its effectiveness.
"Decentralized Resource Allocation and Scheduling via Walrasian Auctions with Negotiable Agents," HuaXing Chen, H. Lau. In: 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology. DOI: 10.1109/WI-IAT.2010.113.
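The conventional baseline, a Walrasian auction with fixed bid generation strategies, can be sketched as a tatonnement loop. The greedy strategy, unit supply per resource, and all values below are our own assumptions for illustration; the paper's contribution is precisely letting agents negotiate away from such a fixed strategy.

```python
def walrasian_auction(agents, resources, step=1.0, max_rounds=100):
    # Tatonnement sketch: the auctioneer posts prices, every agent bids for
    # the resource maximizing its surplus (value minus price), and prices
    # rise on over-demanded resources until demand fits the unit supply.
    prices = {r: 0.0 for r in resources}
    demand = {}
    for _ in range(max_rounds):
        demand = {r: 0 for r in resources}
        for values in agents:
            # A fixed greedy bid generation strategy for every agent.
            pick = max(resources, key=lambda r: values[r] - prices[r])
            if values[pick] - prices[pick] > 0:
                demand[pick] += 1
        over = [r for r in resources if demand[r] > 1]
        if not over:
            break
        for r in over:
            prices[r] += step
    return prices, demand
```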
Music and singers are influential in local society, and an in-depth study of singers is beneficial to various sectors. However, the evolving character and the daunting complexity of the interrelationships among singers make the problem technically intriguing. In this paper, we present a novel commentary-based social network analysis (CBSNA) methodology to analyze singer relationships. Developing weighting schemes and adopting the k-nearest-neighbors (kNN) approach from network theory as a visualization technique, we simplify the resulting dense network to ease understanding and further investigation. Proof-of-concept experiments are conducted on two popular datasets to verify the effectiveness of the proposed approach, and the empirical results are promising.
"Commentary-Based Social Network Analysis and Visualization of Hong Kong Singers," J. Leung, Chun-hung Li. In: 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology. DOI: 10.1109/WI-IAT.2010.287.
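The kNN simplification step can be sketched directly: for each node, keep only its k strongest ties and discard the rest of the dense network. The edge representation and weights below are illustrative, not the paper's data.

```python
def knn_simplify(weights, k):
    # Sparsify a dense weighted network: keep an edge only if at least one
    # endpoint ranks it among its k strongest ties.
    # 'weights' maps (node, node) edge tuples to tie strengths.
    nodes = {n for edge in weights for n in edge}
    kept = set()
    for n in nodes:
        incident = sorted((e for e in weights if n in e),
                          key=lambda e: weights[e], reverse=True)
        kept.update(incident[:k])
    return {e: weights[e] for e in kept}
```

Keeping an edge when either endpoint ranks it highly preserves each node's strongest relationships while pruning the bulk of weak ties, which is what makes the visualization legible.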
Guoxun Wang, Liang Liu, Yi Peng, G. Nie, Gang Kou, Yong Shi
Nowadays, with increasingly intense competition in the market, major banks pay more attention to customer relationship management. A real-time and effective churn analysis of credit card holders is important and helpful for bankers seeking to retain them. In this research we apply 12 classification algorithms to a real-life dataset of credit card holders' behaviors from a major commercial bank in China to construct a predictive churn model. Furthermore, a comparison is made between the predictive performance of the classification algorithms based on Multi-Criteria Decision Making (MCDM) techniques such as PROMETHEE II and TOPSIS. The results show that, using MCDM, banks can choose the most appropriate classification algorithm(s) for customer churn prediction on noisy credit card holders' behavior data.
"Predicting Credit Card Holder Churn in Banks of China Using Data Mining and MCDM," Guoxun Wang, Liang Liu, Yi Peng, G. Nie, Gang Kou, Yong Shi. In: 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology. DOI: 10.1109/WI-IAT.2010.237.
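Of the two MCDM techniques named, TOPSIS is the more mechanical to sketch: build a decision matrix (classifiers as rows, evaluation criteria as columns), normalize and weight it, then rank alternatives by relative closeness to the ideal solution. The sketch below assumes every criterion is a benefit criterion (higher is better); the matrix and weights are illustrative, not the paper's results.

```python
import math

def topsis(matrix, weights):
    # TOPSIS: vector-normalize each criterion column, weight it, then score
    # each alternative (row) by relative closeness to the ideal solution.
    n_rows, n_cols = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(n_rows))) or 1.0
             for j in range(n_cols)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_cols)]
         for i in range(n_rows)]
    ideal = [max(row[j] for row in v) for j in range(n_cols)]
    anti = [min(row[j] for row in v) for j in range(n_cols)]
    def dist(row, ref):
        return math.sqrt(sum((row[j] - ref[j]) ** 2 for j in range(n_cols)))
    return [dist(row, anti) / ((dist(row, ideal) + dist(row, anti)) or 1.0)
            for row in v]
```

A classifier whose criterion values dominate another's will always receive the higher closeness score, which is what makes the ranking usable for selecting the most appropriate algorithm(s).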
In this paper, UML activity diagrams are first defined graph-theoretically, adopting the token concept from Petri nets. The semantics of activity diagrams is then axiomatized as a logical action theory called SCAD. Example applications of SCAD are also given.
"Towards Axiomatizing the Semantics of UML Activity Diagrams: A Situation-Calculus Perspective," Xing Tan, M. Grüninger. In: 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology. DOI: 10.1109/WI-IAT.2010.135.
Data management and analysis have become an integral component of reservoir engineering. An important metric that determines the overall effectiveness of data analysis is data quality. Data provenance, the metadata that pertains to the derivation history of data objects, has emerged as an invaluable asset in evaluating data quality. The reservoir facilities and software systems that collect provenance information are often distributed, making it difficult to analyze provenance data. Our primary contribution in this paper is an approach for provenance information integration in reservoir engineering.
"Integrating Provenance Information in Reservoir Engineering," Jing Zhao, Na Chen, K. Gomadam, V. Prasanna. In: 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology. DOI: 10.1109/WI-IAT.2010.94.