F. J. Martínez-Ruiz, J. M. Arteaga, J. Vanderdonckt, J. González-Calleros, R. M. González
The design and development of graphical user interfaces for rich Internet applications are well known to be difficult tasks, even with tool support. Designers must be aware of the computing platform, the user's characteristics (education, social background, among others) and the environment within which users interact with the application. We present a model-based method for designing this type of user interface that applies an iterative series of XSLT transformations to translate the abstract interface model into a final user interface coded for a specific platform. To avoid dependency on proprietary engines for design tasks, UsiXML is used to model all levels. Several model-based technologies have been proposed, and in this paper we also review an XML-compliant user interface description language: XAML
{"title":"A first draft of a Model-driven Method for Designing Graphical User Interfaces of Rich Internet Applications","authors":"F. J. Martínez-Ruiz, J. M. Arteaga, J. Vanderdonckt, J. González-Calleros, R. M. González","doi":"10.1109/LA-WEB.2006.1","DOIUrl":"https://doi.org/10.1109/LA-WEB.2006.1","url":null,"abstract":"The design and development of graphical user interfaces for rich Internet applications are well known difficult tasks with tools. The designers must be aware of the computing platform, the user's characteristics (education, social background, among others) and the environment within users must interact with the application. We present a method to design this type of user interfaces that is model-based and applies an iterative series of XSLT transformations to translate the abstract modeled interface into a final user interface that is coded in a specific platform. In order to avoid the proprietary engines dependency for designing tasks. UsiXML is used to model all the levels. Several model based technologies have been proposed and in this paper we review a XML-compliant user interface description language: XAML","PeriodicalId":339667,"journal":{"name":"2006 Fourth Latin American Web Congress","volume":"266 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120891264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
G. Rossi, Andres Nieto, L. Mengoni, Nahuel Lofeudo, Liliana Nuño Silva, Damiano Distante
In this paper we present a model-based approach to integrate dynamic and volatile functionality in Web applications. Our approach comprises an extension to the OOHDM design method and a software framework which supports the injection of volatile functionality into the design model. We first motivate our work by discussing the problems which arise when dealing with volatile functionality; some meaningful examples are presented. We briefly describe our design approach, showing how to decouple volatile functionality from the core design model. We then describe an implementation framework which supports the presented ideas by extending Apache Struts with the notion of services and service affinities. Finally, we compare our approach with others' and present some further research we are pursuing
{"title":"Model-Based Design of Volatile Functionality in Web Applications","authors":"G. Rossi, Andres Nieto, L. Mengoni, Nahuel Lofeudo, Liliana Nuño Silva, Damiano Distante","doi":"10.1109/LA-WEB.2006.20","DOIUrl":"https://doi.org/10.1109/LA-WEB.2006.20","url":null,"abstract":"In this paper we present a model-based approach to integrate dynamic and volatile functionality in Web applications. Our approach comprises an extension to the OOHDM design method and a software framework which supports the injection of volatile functionality into the design model. We first motivate our work by discussing the problems which arise when dealing with volatile functionality; some meaningful examples are presented. We briefly describe our design approach, showing how to decouple volatile functionality from the core design model. We finally describe an implementation framework which supports the presented ideas extending Apache Struts with the notion of services and service affinities. Finally, we compare our approach with others' and present some further research we are pursuing","PeriodicalId":339667,"journal":{"name":"2006 Fourth Latin American Web Congress","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122982563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. The author discusses play in World of Warcraft, a popular massively multiplayer online game. World of Warcraft enables self-organizing groups of strangers and friends to collaborate on short-term objectives. Such collaborations may reflect coming changes in globalized work in which we increasingly work with remote others we know little about. In game play, the glue that keeps groups together is the shared objective of completing a "quest" or mission, as well as the shared culture of the game. The game is rich in meaning with a strong narrative, a material culture of weapons, armor, potions, recipes, jewelry, and many other goods, as well as a vibrant economy. Players' backgrounds are diverse but discourse emphasizes understandings about the game rather than players' personal lives. Players learn to be at ease with strangers, to get things done with others they don't know and may never interact with again. The game diminishes some of the impact of things that divide us such as ethnicity, gender, and age, through sharing the game
{"title":"Collaborative Play in World of Warcraft","authors":"B. Nardi","doi":"10.1109/LA-WEB.2006.8","DOIUrl":"https://doi.org/10.1109/LA-WEB.2006.8","url":null,"abstract":"Summary form only given. The author discusses play in World of Warcraft, a popular massively multiplayer online game. World of Warcraft enables self-organizing groups of strangers and friends to collaborate on short-term objectives. Such collaborations may reflect coming changes in globalized work in which we increasingly work with remote others we know little about. In game play, the glue that keeps groups together is the shared objective of completing a \"quest\" or mission, as well as the shared culture of the game. The game is rich in meaning with a strong narrative, a material culture of weapons, armor, potions, recipes, jewelry, and many other goods, as well as a vibrant economy. Players' backgrounds are diverse but discourse emphasizes understandings about the game rather than players' personal lives. Players learn to be at ease with strangers, to get things done with others they don't know and may never interact with again. The game diminishes some of the impact of things that divide us such as ethnicity, gender, and age, through sharing the game","PeriodicalId":339667,"journal":{"name":"2006 Fourth Latin American Web Congress","volume":"351 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133134580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Despite all the tricks and mechanisms spammers use to avoid detection, one fact is certain: spammers have to deliver their message, whatever it is. This fact makes the message itself a weak point of spammers, and thus special attention has been devoted to content-based spam detection. In this paper we introduce a novel pattern discovery approach for spam detection. The proposed approach discovers patterns hidden in the message, and then it builds a classification model by exploring the associations among the discovered patterns. The model is composed of rules showing the relationships between the discovered patterns and classes (i.e., spam/legitimate message). Unlike typical eager classifiers, which build a single model that is good on average for all messages, our lazy approach builds a specific model for each message being classified, possibly taking advantage of particular characteristics of the message. We evaluate our approach under the TREC 2005 Spam Track evaluation framework, in which a chronological sequence of messages is presented sequentially to the filter for classification, and the filter is continuously trained with incremental feedback. Our results indicate that the proposed approach can eliminate almost 99% of spam while incurring 0.4% legitimate email loss. Furthermore, our approach is also efficient in terms of computational complexity, being able to classify more than one hundred messages per second
{"title":"Lazy Associative Classification for Content-based Spam Detection","authors":"Adriano Veloso, Wagner Meira Jr","doi":"10.1109/LA-WEB.2006.19","DOIUrl":"https://doi.org/10.1109/LA-WEB.2006.19","url":null,"abstract":"Despite all tricks and mechanisms spammers use to avoid detection, one fact is certain: spammers have to deliver their message, whatever it is. This fact makes the message itself a weak point of spammers, and thus special attention has being devoted to content-based spam detection. In this paper we introduce a novel pattern discovery approach for spam detection. The proposed approach discovers patterns hidden in the message, and then it builds a classification model by exploring the associations among the discovered patterns. The model is composed by rules, showing the relationships between the discovered patterns and classes (i.e., spam/legitimate message). Differently from typical eager classifiers which build a single model that is good on average for all messages, our lazy approach builds a specific model for each message being classified, possibly taking advantage of particular characteristics of the message. We evaluate our approach under the TREC 2005 Spam Track evaluation framework, in which a chronological sequence of messages is presented sequentially to the filter for classification, and the filter is continuously trained with incremental feedback. Our results indicate that the proposed approach can eliminate almost 99% of spam while incurring 0.4% legitimate email loss. Further, our approach is also efficient in terms of computational complexity, being able to classify more than one hundred messages per second","PeriodicalId":339667,"journal":{"name":"2006 Fourth Latin American Web Congress","volume":"346 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124276308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper describes structures which represent the organization of documents extracted from Open Archives Initiative-compliant data providers. We have called them "ontologies of records". They group similar documents by means of data mining techniques and document clustering algorithms. Ontology markup languages are used to implement them. We show an ontology of records constructed in a semi-automatic way and propose a maintenance process based on collaborative rewriting and revision. Ontologies of records have a well-defined meaning; they enable human and software agents to work in cooperation to exploit data providers. The paper is a small contribution to the construction of lightweight ontologies to be used for different purposes in the semantic Web
{"title":"Construction, Implementation and Maintenance of Ontologies of Records","authors":"M. A. M. Nieto, A. Chávez-Aragón, R. O. Chávez","doi":"10.1109/LA-WEB.2006.9","DOIUrl":"https://doi.org/10.1109/LA-WEB.2006.9","url":null,"abstract":"This paper describes structures which represent the organization of documents extracted from open archives initiative compliant data providers. We have called them \"ontologies of records\". They group similar documents by means of data mining techniques and document clustering algorithms. Ontology markup languages are used to implement them. We show an ontology of records constructed in a semi-automatic way and propose a maintenance process based on collaborative rewriting and revision. Ontologies of records have a well defined meaning, they enable human and software agents to work in cooperation to exploit data providers. The paper is a small contribution to the construction of lightweight ontologies to be used for different purposes in the semantic Web","PeriodicalId":339667,"journal":{"name":"2006 Fourth Latin American Web Congress","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132626763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Eric Sadit Tellez, Edgar Chávez, J. Contreras-Castillo
Remote object management is a key element in distributed and collaborative information retrieval, peer-to-peer systems and agent-oriented programming. In existing implementations the communication and parsing overhead represents a significant fraction of the overall latency in information retrieval tasks. Furthermore, existing architectures are composed of several software layers with potential version conflicts. In this paper, we present SPyRO (Simple Python Remote Objects), a Python remote object management system developed to provide transparent and translucent remote object access. The transparent mode is designed to make it easy to create distributed applications supporting code mobility (Fuggetta et al., 1998) in the Python programming language, whilst the translucent mode is designed to provide total control over remote calls and to allow access from other programming languages. To lower the communication latency, the connection is stateless: local objects and remote calls are not aware of the connection state. The protocol uses several marshalling formats to communicate between peers, trying to maximize homogeneity in a heterogeneous network. To support our claims we present results showing performance improvements of about 10 times when compared with state-of-the-art marshalling formats based on XML
{"title":"SPyRO: Simple Python Remote Objects","authors":"Eric Sadit Tellez, Edgar Chávez, J. Contreras-Castillo","doi":"10.1109/LA-WEB.2006.34","DOIUrl":"https://doi.org/10.1109/LA-WEB.2006.34","url":null,"abstract":"Remote object management is a key element in distributed and collaborative information retrieval, peer-to-peer systems and agent oriented programming. In existing implementations the communication and parsing overhead represents a significant fraction of the overall latency time in information retrieval tasks. Furthermore, existing architectures are composed of several software layers with potential version conflicts. In this paper, we present SPyRO (simple Python remote objects) which is a Python remote object management system developed to provide transparent and translucent remote object access. The transparent mode is designed to create easily distributed applications supporting code mobility (Fuggetta et al., 1998) in Python programming language, whilst the translucent mode is designed to provide total control over remote calls, and allow access from other programming languages. To lower the communication latency, the connection is stateless, local objects and remote calls are not aware of the connection state. The protocol uses several marshal formats to communicate between peers, trying to maximize the homogeneity in a heterogeneous network. To support our claims we present results showing performance improvements of about 10 times when comparing with state of the art marshalling formats based on XML","PeriodicalId":339667,"journal":{"name":"2006 Fourth Latin American Web Congress","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132565624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Since the end of 2004, when the law decree on accessibility for Brazilian governmental Web sites came into force, the federal agencies have been struggling to conform to the norms of the law. Discussions about both the evaluation and the fixing of pages with access barriers are taking place in the mainstream, but little attention has been paid to a feature commonly found on governmental Web sites: the large amount of information compared to commercial Web sites. This paper proposes accessibility planning for Web sites with both a large number of pages and intense daily access, carried out through a work model which we call the accessibility factory. In addition, we present the accessibility planning directions for the Brazilian Central Bank Web site as a case study
{"title":"Accessibility Implementation Planning for Large Governmental Websites: a Case Study","authors":"Filipe Levi, Paulo Melo, U. Lucena","doi":"10.1109/LA-WEB.2006.4","DOIUrl":"https://doi.org/10.1109/LA-WEB.2006.4","url":null,"abstract":"Since the end of 2004, when the law decree about accessibility for Brazilian governmental Web sites came into force, the federal agencies have been struggling to conform to the norms of law. Discussions about both evaluation and fixing of pages with access barriers are taking place on the mainstream, but little attention has been paid to a feature commonly found on governmental Web sites: the large amount of information, compared to commercial Web sites. This paper proposes an accessibility planning for Web sites with both large number of pages and intense daily accesses, carried out through a work model which we call accessibility factory. In addition we present the accessibility planning directions for Brazilian Central Bank Web site as a case study","PeriodicalId":339667,"journal":{"name":"2006 Fourth Latin American Web Congress","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117109111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper describes an approach for the semi-automatic learning object metadata markup of course Web pages and their subsequent extraction into SCORM packages. This is identified as the first step toward recycling course Web pages with the full potential of the semantic Web. The key issue of this approach is to avoid manual metadata markup as much as possible. To achieve this goal, the process of automatic metadata markup is provided with: (i) an ontology of course descriptions in OWL that provides a sound specification of the diverse elements in a course and (ii) context information explicitly set up by linguistic rules. A prototype implementation has been developed in Java for Spanish course Web pages
{"title":"Recycling Course Web Pages for the Semantic Web","authors":"Regina Motz, Raquel Sosa, Michael A. Rodriguez","doi":"10.1109/LA-WEB.2006.31","DOIUrl":"https://doi.org/10.1109/LA-WEB.2006.31","url":null,"abstract":"This paper describes an approach for the semi-automatic learning object metadata markup of course's Web pages and their posterior extraction into SCORM packages. This is identified as the first step to recycle course's Web pages with the full potential of the semantic Web. The key issue of this approach is to avoid as much as possible manual metadata markup. To achieve this goal, the process of automatic metadata markup is provided with: (i) an ontology of course descriptions in OWL that basically provides a sound specification of the diverse elements in a course and (ii) context information explicitly set up by linguistics rules. A prototype implementation has been developed in Java for Spanish course's Web pages","PeriodicalId":339667,"journal":{"name":"2006 Fourth Latin American Web Congress","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115443600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. Web spamming refers to actions intended to mislead search engines into ranking some pages higher than they deserve. The amount of Web spam has increased dramatically, leading to a degradation of search results. In this paper the author presents a taxonomy of spamming techniques, which he believes can help in developing appropriate countermeasures. He also describes some of the spam detection techniques developed at Stanford
{"title":"Overview of Search Engine Spamming","authors":"H. Garcia-Molina","doi":"10.1109/LA-WEB.2006.24","DOIUrl":"https://doi.org/10.1109/LA-WEB.2006.24","url":null,"abstract":"Summary form only given. Web spamming refers to actions intended to mislead search engines into ranking some pages higher than they deserve. The amount of Web spam has increased dramatically, leading to a degradation of search results. In this paper the author presents a taxonomy of spamming techniques, which he believes can help in developing appropriate countermeasures. He also describes some of spam detection techniques they have developed at Stanford","PeriodicalId":339667,"journal":{"name":"2006 Fourth Latin American Web Congress","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126790045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. Gordillo, G. Rossi, A. Moreira, J. Araújo, Carla Vairetti, Matías Urbieta
Complex applications, in particular Web applications, deal with a myriad of different concerns, and some of them affect several others. The result is that these crosscutting concerns are scattered throughout different software artifacts and tangled with other concerns. In this paper we present an approach for modeling and composing navigational concerns in Web applications. By showing how to build partial navigation scenarios with user interaction diagrams, analyzing how they crosscut and defining corresponding composition rules, we add modularity to the requirements specification stage, facilitating reasoning about the requirements and a consequent tradeoff analysis to support informed decisions on architectural choices. Moreover, by focusing on navigation concerns during the early stages of application development, we aim to address the impact of crosscutting concerns in design models, improve the discovery of meaningful design artefacts and improve the traceability of design decisions
{"title":"Modeling and Composing Navigational Concerns in Web Applications. Requirements and Design Issues.","authors":"S. Gordillo, G. Rossi, A. Moreira, J. Araújo, Carla Vairetti, Matías Urbieta","doi":"10.1109/LA-WEB.2006.21","DOIUrl":"https://doi.org/10.1109/LA-WEB.2006.21","url":null,"abstract":"Complex applications, in particular Web applications, deal with a myriad of different concerns and some of them affect several others. The result is that these crosscutting concerns are scattered throughout different software artifacts and tangled with other concerns. In this paper we present an approach for modeling and composing navigational concerns in Web applications. By showing how to build partial navigation scenarios with user interaction diagrams, analyzing how they crosscut and defining corresponding composition rules, we add modularity to the requirements specification stage, facilitating reasoning about the requirements and a consequent tradeoff analysis to support informed decisions on architectural choices. Moreover, by focusing on navigation concerns during the early stages of applications development, we aim to address the impact of crosscutting concerns in design models, improve the discovering of meaningful design artefacts and improve traceability of design decisions","PeriodicalId":339667,"journal":{"name":"2006 Fourth Latin American Web Congress","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131307755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}