Fraud Analysis and Prevention in e-Commerce Transactions
Evandro Caldeira, Gabriel Brandão, A. Pereira

The volume of electronic transactions has risen significantly in recent years, mainly due to the popularization of electronic commerce (e-commerce) through online retailers such as Amazon.com, eBay, and AliExpress.com. We also observe a significant increase in the number of fraud cases, resulting in billions of dollars in losses worldwide each year. It is therefore important to develop and apply techniques that can assist in fraud detection and prevention, which motivates our research. This work aims to apply and evaluate computational intelligence techniques (e.g., data mining and machine learning) to identify fraud in electronic transactions, more specifically in credit card operations performed through Web payment gateways. To evaluate these techniques, we apply them to an actual dataset from the most popular Brazilian electronic payment service. Our results show good fraud detection performance, with gains of up to 43 percent on an economic metric when compared to the company's actual scenario.
{"title":"Fraud Analysis and Prevention in e-Commerce Transactions","authors":"Evandro Caldeira, Gabriel Brandão, A. Pereira","doi":"10.1109/LAWeb.2014.23","DOIUrl":"https://doi.org/10.1109/LAWeb.2014.23","url":null,"abstract":"The volume of electronic transactions has raised significantly in last years, mainly due to the popularization of electronic commerce (e-commerce), such as online retailers (e.g., Amazon.com, eBay, Ali Express.com). We also observe a significant increase in the number of fraud cases, resulting in billions of dollars losses each year worldwide. Therefore it is important and necessary to developed and apply techniques that can assist in fraud detection and prevention, which motivates our research. This work aims to apply and evaluate computational intelligence techniques (e.g., Data mining and machine learning) to identify fraud in electronic transactions, more specifically in credit card operations performed by Web payment gateways. In order to evaluate the techniques, we apply and evaluate them in an actual dataset of the most popular Brazilian electronic payment service. Our results show good performance in fraud detection, presenting gains up to 43 percent of an economic metric, when compared to the actual scenario of the company.","PeriodicalId":251627,"journal":{"name":"2014 9th Latin American Web Congress","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130396115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Publishing and Querying Government Multidimensional Data Using QB4OLAP
M. Bouza, B. Elliot, Lorena Etcheverry, A. Vaisman
The Web is changing the way in which data warehouses are designed, used, and queried. With the advent of initiatives such as Open Data and Open Government, organizations want to share their multidimensional data cubes and make them available to be queried online. The RDF Data Cube Vocabulary (QB), the W3C standard for publishing statistical data in RDF, has several limitations that prevent it from fully supporting the multidimensional model. The QB4OLAP vocabulary extends QB to overcome these limitations and provides the distinctive ability to implement several OLAP operations, such as roll-up, slice, and dice, using standard SPARQL queries. In this paper we present QB4OLAP Engine, a tool that transforms multidimensional data stored in relational data warehouses into RDF using QB4OLAP, and we apply the solution to a real-world case based on the national survey of housing, health services, and income carried out by the government of Uruguay.
{"title":"Publishing and Querying Government Multidimensional Data Using QB4OLAP","authors":"M. Bouza, B. Elliot, Lorena Etcheverry, A. Vaisman","doi":"10.1109/LAWeb.2014.11","DOIUrl":"https://doi.org/10.1109/LAWeb.2014.11","url":null,"abstract":"The web is changing the way in which data warehouses are designed, used, and queried. With the advent of initiatives such as Open Data and Open Government, organizations want to share their multidimensional data cubes and make them available to be queried online. The RDF data cube vocabulary (QB), the W3C standard to publish statistical data in RDF, presents several limitations to fully support the multidimensional model. The QB4OLAP vocabulary extends QB to overcome these limitations, and provides the distinctive feature of being able to implement several OLAP operations, such as rollup, slice, and dice using standard SPARQL queries. In this paper we present QB4OLAP Engine, a tool that transforms multidimensional data stored in relational DWs into RDF using QB4OLAP, and apply the solution to a real-world case, based on the national survey of housing, health services, and income, carried out by the government of Uruguay.","PeriodicalId":251627,"journal":{"name":"2014 9th Latin American Web Congress","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133048641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visualizing Contextual Information in Aggregated Web Content Repositories
A. Scharl, Ruslan Kamolov, Daniel Fischl, Walter Rafelsberger, Alistair Jones
Understanding stakeholder perceptions and the impact of campaigns provides key insights for communication experts and policy makers. A structured analysis of Web content can help answer these questions, particularly if this analysis involves the ability to extract, disambiguate, and visualize contextual information. After summarizing methods for acquiring and annotating Web content repositories, we present visualization techniques for exploring the lexical, geospatial, and relational context of entities in these repositories. The examples stem from the Media Watch on Climate Change, a publicly available Web portal that aggregates environmental resources from various online sources.
{"title":"Visualizing Contextual Information in Aggregated Web Content Repositories","authors":"A. Scharl, Ruslan Kamolov, Daniel Fischl, Walter Rafelsberger, Alistair Jones","doi":"10.1109/LAWeb.2014.18","DOIUrl":"https://doi.org/10.1109/LAWeb.2014.18","url":null,"abstract":"Understanding stakeholder perceptions and the impact of campaigns are key insights for communication experts and policy makers. A structured analysis of Web content can help answer these questions, particularly if this analysis involves the ability to extract, disambiguate and visualize contextual information. After summarizing methods used for acquiring and annotating Web content repositories, we present visualization techniques to explore the lexical, geospatial and relational context of entities in these repositories. The examples stem from the Media Watch on Climate Change, a publicly available Web portal that aggregates environmental resources from various online sources.","PeriodicalId":251627,"journal":{"name":"2014 9th Latin American Web Congress","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121973113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Generation Environment for Front-End Layer in e-Government Content Management Systems
Vitor Mesaque Alves de Lima, R. Marcacini, Marcelo Henrique Pereira Lima, Maria Istela Cagnin, M. Turine
The Internet is a way for organizations to show their identity, purposes, and achievements, and to provide services and information to the public. A Content Management System (CMS) provides an efficient solution for managing the content of Web sites and portals in general. Research shows that organizations are interested in reducing the costs associated with software development, which makes automated tools and a systematic reuse process essential. In this paper we present a generation environment for the front-end layer of e-government CMSs, in the context of systematic reuse with a Software Product Line (SPL) automated through frameworks, application generators, and a reuse repository. The generation environment implements automated mechanisms to reduce accessibility problems in the generated Web applications.
{"title":"A Generation Environment for Front-End Layer in e-Government Content Management Systems","authors":"Vitor Mesaque Alves de Lima, R. Marcacini, Marcelo Henrique Pereira Lima, Maria Istela Cagnin, M. Turine","doi":"10.1109/LAWeb.2014.20","DOIUrl":"https://doi.org/10.1109/LAWeb.2014.20","url":null,"abstract":"The Internet is a way by which organizations may show its identity, its purposes, its achievements, providing services and information to the public. A Content Management System (CMS) provides an efficient solution for content managing for Web sites and portals in general. Researches show that the organizations are interested in cost reduction associated to software development, and it is essential to use automated tools and a reuse systematic process. In this paper we present a generation environment for front-end layer in e-government CMS, in context of systematic reuse using Software Product Line (SPL) automated through frameworks, application generators and reuse repository. The generation environment implements automated mechanisms to reduce accessibility problems in generated Web applications.","PeriodicalId":251627,"journal":{"name":"2014 9th Latin American Web Congress","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125633903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Rendering-Based Method for Selecting the Main Data Region in Web Pages
Leandro Neiva Lopes Figueiredo, Anderson A. Ferreira, G. T. Assis
Extracting data from web pages is an important task for several applications, such as comparison shopping and data mining. Much of that data is provided by search result pages, in which each result, called a search result record, represents a record from a database. One of the most important steps in extracting such records is identifying, among the different data regions of a page, the one that contains the records to be extracted. An incorrect identification of this region may lead to an incorrect extraction of the search result records. In this paper, we propose a simple but efficient method that generates a path expression to select the main data region of a given page, based on the rendering area of its elements. The generated path expression may be used by wrappers to extract the search result records and their data units, reducing wrapper complexity and increasing accuracy. Experimental results using web pages from several domains show that the method is highly effective.
{"title":"A Rendering-Based Method for Selecting the Main Data Region in Web Pages","authors":"Leandro Neiva Lopes Figueiredo, Anderson A. Ferreira, G. T. Assis","doi":"10.1109/LAWeb.2014.14","DOIUrl":"https://doi.org/10.1109/LAWeb.2014.14","url":null,"abstract":"Extracting data from web pages is an important task for several applications, such as comparison shopping and data mining. Much of that data is provided by search result pages, in which each result, called search result record, represents a record from a database. One of the most important steps for extracting such records is identifying, among different data regions from a page, one that contains the records to be extracted. An incorrect identification of this region may lead to an incorrect extraction of the search result records. In this paper, we propose a simple but efficient method that generates path expression to select the main data region from a given page, based on the rendering area information of its elements. The generated path expression may be used by wrappers for extracting the search result records and its data units, reducing its complexity and increasing its accuracy. Experimental results using web pages from several domains show that the method is highly effective.","PeriodicalId":251627,"journal":{"name":"2014 9th Latin American Web Congress","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131398352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inside-In Search: An Alternative for Performing Ancillary Search Tasks on the Web
R. Cava, C. Freitas, Eric Barboni, Philippe A. Palanque, M. Winckler
Some of the search tasks users perform on the Web aim at complementing the information they are currently reading in a Web page: these are ancillary search tasks. Currently, the standard way to support such ancillary searches follows an inside-out approach, in which query results are shown in a new window or tab, or replace the current page. We claim that such an inside-out approach is only suitable if users really want to dissociate the search results from the Web page they were reading. In this paper we propose an alternative approach, called "inside-in", where query results are displayed inside the Web page next to the keyword that motivated the user to launch the ancillary search. To demonstrate the feasibility of our approach, we have developed a tool that embeds an egocentric information visualization technique in the Web page. The tool supports nested queries and allows the display of multiple data attributes. The approach is illustrated by a case study based on ancillary searches of co-authors from a digital library. The paper also reports some preliminary results from an experiment conducted with remote users.
{"title":"Inside-In Search: An Alternative for Performing Ancillary Search Tasks on the Web","authors":"R. Cava, C. Freitas, Eric Barboni, Philippe A. Palanque, M. Winckler","doi":"10.1109/LAWeb.2014.21","DOIUrl":"https://doi.org/10.1109/LAWeb.2014.21","url":null,"abstract":"Some of the search tasks users perform on the Web aim at complementing the information they are currently reading in a Web page: they are ancillary search tasks. Currently, the standard way to support such ancillary searches follows an inside-out approach, which means that query results are shown in a new window/tab or as a replacement of the current page. We claim that such inside-out approach is only suitable if users really want to dissociate the search results from the Web page they were reading. In this paper we propose an alternative approach, called \"inside-in\", where query results are displayed inside the Web page next to the keyword that motivated the user to launch an ancillary search. In order to demonstrate the feasibility of our approach we have developed a tool that embeds an egocentric information visualization technique in the Web page. This tool supports nested queries and allows the display of multiple data attributes. The approach is illustrated by a case study based on ancillary searches of co authors from a digital library. The paper also reports some preliminary results obtained with an experiment conducted with remote users.","PeriodicalId":251627,"journal":{"name":"2014 9th Latin American Web Congress","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125481312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling and Analyzing the Video Game Live-Streaming Community
Gustavo Nascimento, Manoel Horta Ribeiro, L. Cerf, Natalia Cesario, Mehdi Kaytoue-Uberall, Chedy Raïssi, Thiago Vasconcelos, Wagner Meira Jr
In parallel to the exponential growth of the gaming industry, video game live-streaming is rising as a major form of online entertainment. Gathering a heterogeneous community, the popularity of this new medium has led to the creation of web services dedicated to streaming video games, such as Twitch.tv. In this paper, we propose a model to characterize how streamers and spectators behave, based on their possible actions on Twitch, and we use it to perform a case study of StarCraft II streamers and spectators. In the case study we analyze a large amount of data collected from Twitch.tv's chat in order to better understand how streamers behave and how this new form of online entertainment differs from previous ones. Based on this analysis, we were able to better understand channel switching and channel surfing, and to create a model for predicting the number of chat messages based on the number of spectators. We were also able to describe behavioral patterns, such as the mass evasion of spectators before the end of a streaming session in a channel.
{"title":"Modeling and Analyzing the Video Game Live-Streaming Community","authors":"Gustavo Nascimento, Manoel Horta Ribeiro, L. Cerf, Natalia Cesario, Mehdi Kaytoue-Uberall, Chedy Raïssi, Thiago Vasconcelos, Wagner Meira Jr","doi":"10.1109/LAWeb.2014.9","DOIUrl":"https://doi.org/10.1109/LAWeb.2014.9","url":null,"abstract":"In parallel to the exponential growth of the gaming industry, video game live-streaming is rising as a major form of online entertainment. Gathering a heterogeneous community, the popularity of this new media led to the creation of web services just for streaming video games, such as Twitch. TV. In this paper, we propose a model to characterize how streamers and spectators behave, based on their possible actions in Twitch and, using it, we perform a case study on the Star craft II streamers and spectators. In the case study we analyze a large amount of data collected in Twitch. TV's chat in order to better understand how streamers behave, and how this new form of online entertainment is different from previous ones. Based on this analysis, we were able to better understand channel switching, channel surfing, and to create a model for predicting the number of chat messages based on the number of spectators. We were also able to describe behavioral patterns, such as the mass evasion of spectators before the end of a streaming section in a channel.","PeriodicalId":251627,"journal":{"name":"2014 9th Latin American Web Congress","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115366076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Where Should I Go? City Recommendation Based on User Communities
Ruhan Bidart, A. Pereira, J. Almeida, A. Lacerda

Recommender systems play a key role in users' decision-making processes in Web systems. In tourism, they are widely used to recommend hotels, tourist attractions, accommodations, and so on. In this paper, we present a personalized neighborhood-based method to recommend cities, a fundamental problem whose solution supports other tourism recommendations. Our recommendation approach takes into account information from two different layers: an upper layer composed of cities and a lower layer composed of each city's attractions. It consists of first building a social network among users, where the edges are weighted by the similarity of interests between pairs of users, and then using this network as a component of a collaborative filtering strategy to recommend cities. We evaluate our method using a large dataset collected from TripAdvisor. Our experimental results show that our approach, despite being simple, outperforms the precision achieved by a state-of-the-art baseline approach for implicit feedback (WRMF), which exploits only the overall popularity of cities. We also show that the use of the secondary (attraction) layer contributes to improving the effectiveness of our approach.
{"title":"Where Should I Go? City Recommendation Based on User Communities","authors":"Ruhan Bidart, A. Pereira, J. Almeida, A. Lacerda","doi":"10.1109/LAWeb.2014.15","DOIUrl":"https://doi.org/10.1109/LAWeb.2014.15","url":null,"abstract":"Recommender systems play a key role in the decision making process of users in Web systems. In tourism, it is widely used to recommend hotels, tourist attractions, accommodations, etc. In this paper, we present a personalized neighborhood-based method to recommend cities. This is a fundamental problem whose solution support other tourism recommendations. Our recommendation approach takes into account information of two different layers, namely, an upper layer composed by cities and a lower layer composed by attractions of each city. It consists of first building a social network among users, where the edges are weighted by the similarity of interests between pairs of users, and then using this network as a component of a collaborative filtering strategy to recommend cities. We evaluate our method using a large dataset collected from Trip Advisor. Our experimental results show that our approach, despite being simple, outperforms the precision achieved by a state-of-the-art baseline approach for implicit feedback (WRMF), which exploits only the overall popularity of cities. We also show that the use of a secondary layer (attraction) contributes to improve the effectiveness of our approach.","PeriodicalId":251627,"journal":{"name":"2014 9th Latin American Web Congress","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133072941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Personalized Geographic-Based Diffusion Model for Location Recommendations in LBSN
I. Nunes, L. Marinho

Location-Based Social Networks (LBSNs) have emerged to allow users to share their visited locations with their friends. Foursquare, for instance, is a popular LBSN where users endorse and share tips about visited locations. To improve the experience of LBSN users, simple recommender services, typically based on geographical proximity, are usually provided. The state-of-the-art location recommenders in LBSNs are based on linear combinations of collaborative filtering, geographic-aware, and social-aware recommenders, which implies fine-tuning and running three (or more) separate algorithms for each recommendation request. In this paper, we present a new location recommender that integrates collaborative filtering and geographic information into a single diffusion-based recommendation model. The idea is to learn a personalized ranking of locations for a target user, considering the locations visited by similar users, the distances between visited and non-visited locations, and the regions the user prefers to visit. We conduct experiments on real data from two different LBSNs, namely Gowalla and Foursquare, and show that our approach outperforms the state of the art in most of the cities evaluated.
{"title":"A Personalized Geographic-Based Diffusion Model for Location Recommendations in LBSN","authors":"I. Nunes, L. Marinho","doi":"10.1109/LAWeb.2014.22","DOIUrl":"https://doi.org/10.1109/LAWeb.2014.22","url":null,"abstract":"Location Based Social Networks (LBSN) have emerged with the purpose of allowing users to share their visited locations with their friends. Foursquare, for instance, is a popular LBSN where users endorse and share tips about visited locations. In order to improve the experience of LBSN users, simple recommender services, typically based on geographical proximity, are usually provided. The state-of-the-art location recommenders in LBSN are based on linear combinations of collaborative filtering, geo and social-aware recommenders, which implies fine tuning and running three (or more) separate algorithms for each recommendation request. In this paper, we present a new location recommender that integrates collaborative filtering and geographic information into one single diffusion-based recommendation model. The idea is to learn a personalized ranking of locations for a target user considering the locations visited by similar users, the distances between visited and non visited locations and the regions he prefers to visit. We conduct experiments on real data from two different LBSN, namely, Go Walla and Foursquare, and show that our approach outperforms the state-of-art in most of the cities evaluated.","PeriodicalId":251627,"journal":{"name":"2014 9th Latin American Web Congress","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125135305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gathering Alumni Information from a Web Social Network
G. Gonçalves, Anderson A. Ferreira, G. T. Assis, Andréa Iabrudi Tavares
An undergraduate program must prepare its students for the major needs of the labor market. One of the main ways to identify the demands to be met is to manage information about the program's alumni: gathering data on them and finding out their main areas of employment in the labor market or their main fields of research in academia. Usually, this data is obtained through forms available on the Web or sent by mail or email; however, these methods, in addition to being laborious, yield poor response rates from alumni. Thus, this work proposes a novel method to help the teaching staff of undergraduate programs gather information on the desired population of alumni, semi-automatically, on the Web. Overall, using a few alumni pages as an initial set of samples, the proposed method was able to gather information on about twice as many alumni as conventional methods.
{"title":"Gathering Alumni Information from a Web Social Network","authors":"G. Gonçalves, Anderson A. Ferreira, G. T. Assis, Andréa Iabrudi Tavares","doi":"10.1109/LAWeb.2014.17","DOIUrl":"https://doi.org/10.1109/LAWeb.2014.17","url":null,"abstract":"An undergraduate program must prepare its students for the major needs of the labor market. One of the main ways to identify what are the demands to be met is creating a manner to manage information of its alumni. This consists of gathering data from program's alumni and finding out what are their main areas of employment on the labor market or which are their main fields of research in the academy. Usually, this data is obtained through available forms on the Web or forwarded by mail or email, however, these methods, in addition to being laborious, do not present good feedback from the alumni. Thus, this work proposes a novel method to help teaching staffs of undergraduate programs to gather information on the desired population of alumni, semi-automatically, on the Web. Overall, by using a few alumni pages as an initial set of sample pages, the proposed method was capable of gathering information concerning a number of alumni twice as bigger than adopted conventional methods.","PeriodicalId":251627,"journal":{"name":"2014 9th Latin American Web Congress","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130238316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}