In line with the growing success of e-commerce, demands for an open infrastructure providing security services are growing stronger. Authentication and authorisation infrastructures (AAIs) enhanced with an attribute-based access control model (ABAC) offer such services to service federations and customers. As AAIs are a security-enhancing technology, their design and implementation must comply with extremely high quality standards. Failures and vulnerabilities in the basic security services they provide affect the service-providing processes exponentially. Various AAI concepts, frameworks, and products have been developed in the past. Building on these experiences, we define a pattern system for AAIs that will ensure the interoperability and quality of future AAI solutions. The derived pattern system consists of security patterns already published and in use, as well as open standards such as SAML and XACML and related patterns. It can be used directly in the software development cycle, as proposed by different methodologies.
{"title":"Patterns for Authentication and Authorisation Infrastructures","authors":"Roland Erber, Christian Schläger, G. Pernul","doi":"10.1109/DEXA.2007.4","DOIUrl":"https://doi.org/10.1109/DEXA.2007.4","url":null,"abstract":"In line with the growing success of e-commerce demands for an open infrastructure providing security services are growing stronger. Authentication and authorisation infrastructures (AAIs) enhanced with an attribute-based access control model (ABAC) offer such services to service federations and customers. As AAIs are a security enhancing technology, design and implementation must comply with extremely high quality standards. Failures and vulnerabilities in the provided basic security services exponentially affect the service providing processes. Various AAI concepts, frameworks, and products have been developed in the past. Building on these experiences, we define a pattern system for AAIs. It will ensure interoperability and quality of future AAI solutions. The derived pattern system consists of security patterns already published and in use, as well as on open standards like SAML and XACML and related patterns. It can be directly used in the software development cycle, as proposed by different methodologies.","PeriodicalId":314834,"journal":{"name":"18th International Workshop on Database and Expert Systems Applications (DEXA 2007)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115292372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Joan De Boeck, Kristof Verpoorten, K. Luyten, K. Coninx
Over the past few years, personal portable computer systems such as PDAs or laptops have been used in different contexts, such as in meetings, at the office, or at home. In the current era of multimodal interaction, each context may require different interaction strategies or system settings to allow end-users to reach their envisioned goals. For instance, in a meeting room a user may want to use the projection equipment and disable the audio output for a presentation, while audio input and output may be important during a teleconference. In present computer systems most changes have to be made manually and require explicit interaction with the system. The number of different devices used in such environments means that this configuration step imposes a high cognitive load and interrupts the tasks being executed by the end-user. In this paper we present how proactive user interfaces may predict the next interface changes invoked by context switches or user actions. In particular, we focus on two machine learning algorithms, decision trees and Markov models, that may support this proactive behaviour for multimodal user interfaces. Based on some simple but relevant scenarios, we compare the outcomes of both implementations in order to decide which algorithm is most applicable in this context.
{"title":"A Comparison between Decision Trees and Markov Models to Support Proactive Interfaces","authors":"Joan De Boeck, Kristof Verpoorten, K. Luyten, K. Coninx","doi":"10.1109/DEXA.2007.94","DOIUrl":"https://doi.org/10.1109/DEXA.2007.94","url":null,"abstract":"During the past few years, personal portable computer systems such as PDAs or laptops are being used in different contexts such as in meetings, at the office, or at home. In the current era of multimodal interaction, each context may require other interaction strategies or system settings to allow the end-users to reach their envisioned goals. For instance, in a meeting room a user may want to use the projection equipment and disable the audio output for a presentation, while audio input and output may be important while in a teleconference. In present computer systems most changes have to be made manually and require explicit interaction with the system. The number of different devices used in such environments makes that this configuration step results in a high cognitive load and causes interrupts of the tasks being executed by the end-user. In this paper we present how proactive user interfaces may predict the next interface changes invoked by context switches or user actions. In particular, we will focus on two machine learning algorithms, decision trees and Markov models, that may support this proactive behaviour for multimodal user interfaces. 
Based on some simple but relevant scenarios, we compare the outcome of both implementations in order to decide which algorithm is most applicable in this context.","PeriodicalId":314834,"journal":{"name":"18th International Workshop on Database and Expert Systems Applications (DEXA 2007)","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125223488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
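One of the two learners the paper compares, a first-order Markov model, can be sketched in a few lines: count transitions between observed interface events and predict the most frequent successor. The event names below are invented for illustration and are not taken from the paper's scenarios.

```python
from collections import Counter, defaultdict

# First-order Markov model over interface events: learn transition
# counts from an event history, predict the most likely next event.
class MarkovPredictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def train(self, sequence):
        # Count each (current event -> next event) pair.
        for current, nxt in zip(sequence, sequence[1:]):
            self.transitions[current][nxt] += 1

    def predict(self, current):
        # Return the most frequent successor, or None if unseen.
        counts = self.transitions.get(current)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

history = ["enter_meeting_room", "open_slides", "enable_projector",
           "mute_audio", "enter_meeting_room", "open_slides",
           "enable_projector", "mute_audio"]
m = MarkovPredictor()
m.train(history)
print(m.predict("open_slides"))  # enable_projector
```

A decision tree would instead condition the prediction on several context features (location, time, device set) rather than only on the previous event, which is the trade-off the paper's comparison examines.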
eGovernment has become reality: most governmental organisations offer electronic services to citizens through the Internet. However, inter-organisational collaboration, especially by means of workflows, is not as widespread. To foster transparency and control in collaborative workflows according to global policies, such as European law regulations, collaborating partners have to provide partial insight into local events and workflows in order to comply with the global regulations. An example collaboration between Eurojust and Europol highlights the shortcomings of existing collaboration architectures, which lack the control and transparency capabilities needed for the application domain of human-centric eGovernment workflows. To address this, we propose an architecture for a modular runtime infrastructure for decentralised, collaborative eGovernment workflows, respecting the heterogeneity of the system and application landscapes in place.
{"title":"Collaborative Workflow Management for eGovernment","authors":"C. Wolter, H. Plate, Cédric Hébert","doi":"10.1109/DEXA.2007.15","DOIUrl":"https://doi.org/10.1109/DEXA.2007.15","url":null,"abstract":"eGovernment has become reality: Most governmental organisations offer electronic services to citizens through the Internet. However, inter-organizational collaboration, especially by means of workflows, is not as widespread. To foster transparency and control in collaborative workflows according to global policies, such as European law regulations, collaborative partners have to partially provide insight into local events and workflows to comply to the global regulations. An example collaboration between Eurojust and Europol emphasises the shortage of existing collaboration architectures lacking the desired control and transparency capabilities needed for the application domain of human-centric eGovernment workflows. To address this, we propose an architecture of a modular runtime infrastructure for decentralized, collaborative eGovernment workflows, hereby respecting the heterogeneity of the system and application landscapes in place.","PeriodicalId":314834,"journal":{"name":"18th International Workshop on Database and Expert Systems Applications (DEXA 2007)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116939375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IPY helps to overcome national and other borders and constraints towards better information access and management of precious natural resources worldwide. It carries promises of a sustainable global village. Beyond the polar regions, IPY will leave a global legacy, most of which is digital and can be expressed as new data and synthesised information. IPY will affect how we do and fund science, how we administer the globe, how we teach and evaluate, and eventually how society lives and carries out business and democracy. IPY offers solutions in times of massive global resource pressures and deserves our full support. However, it must be ensured that IPY remains balanced in its economic, social, and ecological concepts. Teaching is the key to its success.
{"title":"The digital teaching legacy of the International Polar Year (IPY): Details of a present to the global village for achieving sustainability","authors":"F. Huettmann","doi":"10.1109/DEXA.2007.31","DOIUrl":"https://doi.org/10.1109/DEXA.2007.31","url":null,"abstract":"IPY helps to overcome national and other borders and constraints towards better information access and management of precious natural resources worldwide. It carries promises of a sustainable global village. Beyond the polar regions, IPY will leave a global legacy, most of it is digital and can be expressed as new data and synthesized information. IPY will affect how we do and fund science, how we administer the globe, how we teach and evaluate, and eventually, how society lives and carries out business and democracy. IPY offers solutions in times of massive global resource pressures, and deserves our full support. However, it needs to be assured that IPY remains balanced in its economic, social and ecological concepts. Teaching is the key to its success.","PeriodicalId":314834,"journal":{"name":"18th International Workshop on Database and Expert Systems Applications (DEXA 2007)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124984814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
G. Polaillon, Marie-Aude Aufaure, B. L. Grand, M. Soto
This paper presents an information retrieval methodology which uses formal concept analysis in conjunction with semantics to provide contextual answers to users' queries. The user formulates a query on a set of heterogeneous data sources. This set is semantically unified by the proposed notion of a conceptual context. A context can be global (defining a semantic space the user can query) or instantaneous (defining the current position of the user in the semantic space). Our methodology consists first of a pre-treatment providing the global conceptual context, and then of an online contextual processing of users' requests, associated with an instantaneous context. This methodology can be applied to heterogeneous data sources such as Web pages, databases, email, personal documents, and images. One interest of our approach is to perform more relevant and refined information retrieval and contextual navigation, closer to the users' expectations.
{"title":"FCA for contextual semantic navigation and information retrieval in heterogeneous information systems","authors":"G. Polaillon, Marie-Aude Aufaure, B. L. Grand, M. Soto","doi":"10.1109/DEXA.2007.147","DOIUrl":"https://doi.org/10.1109/DEXA.2007.147","url":null,"abstract":"This paper presents an information retrieval methodology which uses formal concept analysis in conjunction with semantics to provide contextual answers to users' queries. User formulates a query on a set of heterogeneous data sources. This set is semantically unified by the proposed notion conceptual context. A context can be global: it defines a semantic space the user can query - or instantaneous- it defines the current position of the user in the semantic space. Our methodology consists first in a pre-treatment providing the global conceptual context and then in an online contextual processing of users' requests, associated to an instantaneous context.This methodology can be applied to heterogeneous data sources such as Web pages, databases, email, personal documents and images, etc. One interest of our approach is to perform a more relevant and refined information retrieval and contextual navigation, closer to the users ' expectation.","PeriodicalId":314834,"journal":{"name":"18th International Workshop on Database and Expert Systems Applications (DEXA 2007)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125064762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anomaly detection has the double purpose of discovering interesting exceptions and identifying incorrect data in huge amounts of data. Since anomalies are rare events which violate the frequent relationships among data, we propose a method to detect frequent relationships and then extract anomalies. The RADAR (Research of Anomalous Data through Association Rules) method is based on data mining techniques to extract frequent "rules" from datasets, in the form of quasi-functional dependencies. Such dependencies are extracted by using association rules. Given a quasi-functional dependency, we can discover the associated anomalies by querying either the original database or the association rules previously mined. Analysing this kind of anomaly can either reveal the presence of erroneous data or highlight novel information representing significant outliers of frequent rules. Our method does not require any prior knowledge and infers rules directly from the data. Experiments performed on real XML databases are reported to show the applicability and effectiveness of the proposed approach.
{"title":"Anomaly Detection in XML databases by means of Association Rules","authors":"G. Bruno, P. Garza, E. Quintarelli, R. Rossato","doi":"10.1109/DEXA.2007.68","DOIUrl":"https://doi.org/10.1109/DEXA.2007.68","url":null,"abstract":"Anomaly detection has the double purpose of discovering interesting exceptions and identifying incorrect data in huge amounts of data. Since anomalies are rare events which violate the frequent relationships among data, we propose a method to detect frequent relationships and then extract anomalies. The RADAR (Research of Anomalous Data through Association Rules) method is based on data mining techniques to extract frequent \"rules\" from datasets, in the form of quasi-functional dependencies. Such dependencies are extracted by using association rules. Given a quasi-functional dependency, we can discover the associated anomalies by querying either the original database or the association rules previously mined. The analysis on this kind of anomaly can either derive the presence of erroneous data or highlight novel information which represents significant outliers of frequent rules. Our method does not require any previous knowledge and directly infers rules from the data. Experiments performed on real XML databases are reported to show the applicability and effectiveness of the proposed approach.","PeriodicalId":314834,"journal":{"name":"18th International Workshop on Database and Expert Systems Applications (DEXA 2007)","volume":"427 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116537711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a case study of applying the Problem-Based Learning (PBL) approach in a Web-based environment. The application domain concerns Business Management and Technology Management topics. First, the paper describes the rationale and the main features behind PBL for creating business leaders in the emerging competitive environment. Then, it introduces the Web-based system called "Virtual eBMS" that has been designed and implemented at the e-Business Management Section (eBMS) of the Scuola Superiore ISUFI. The system is illustrated by highlighting the operational framework that allows an author to design PBL-based curricula; moreover, the two main learning approaches (structured and unstructured) through which learners access the curricula are proposed, allowing them to select the learning resources most suitable for solving complex problems. Finally, the main benefits from the learner perspective, together with some indications for future research, conclude the paper.
{"title":"An e-Learning System Supporting the Problem-Based-Learning Approach: the Case of \"Virtual eBMS\"","authors":"G. Secundo, G. Elia, Cesare Taurino","doi":"10.1109/DEXA.2007.121","DOIUrl":"https://doi.org/10.1109/DEXA.2007.121","url":null,"abstract":"This paper presents a case study of applying the Problem Based Learning (PBL) approach in a Web-based environment. The application domain concerns Business Management and Technology Management topics. At first, the paper describes the rationale and the main features behind the PBL for creating Business Leaders in the Emerging Competitive Environment. Then, it introduces the Web-based system called \"Virtual eBMS\" that has been designed and implemented at the e-Business Management Section (eBMS) of the Scuola Superiore ISUFI. The system is illustrated by highlighting the operational framework allowing an author to design PBL-based curricula; moreover the two main learning approaches (structured and unstructured) referred to the learner access to the curricula are proposed, allowing him to capture the most suitable learning resources for solving complex problems. Finally, the main benefits from the learner perspective, together with some indications for future research will end the paper.","PeriodicalId":314834,"journal":{"name":"18th International Workshop on Database and Expert Systems Applications (DEXA 2007)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128682895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The WWW is a very large repository composed of many documents that are stored by several data sources. Web search engines allow retrieval based on keywords. Nevertheless, published documents may be incomplete, obsolete, or huge. Therefore, search might also include quality criteria such as completeness, recentness, update frequency, and granularity. The rigidity of traditional DBMSs does not allow the expression of user preferences based on soft criteria; fuzzy DBMSs, such as SQLfi, are necessary instead. Here we present a tool for the selection of the best data sources and documents in terms of user preferences. Documents and data sources are described according to quality parameters stored in a catalog, and retrieval is done by means of fuzzy SQLf queries. Our tool offers a user-oriented wizard to allow the expression of requirements.
{"title":"The Egloo Fuzzy Web Data Source Selection Tool","authors":"Marlene Goncalves, Leonid José Tineo Rodríguez","doi":"10.1109/DEXA.2007.154","DOIUrl":"https://doi.org/10.1109/DEXA.2007.154","url":null,"abstract":"The WWW is a very large repository composed of many documents that are stored by several data sources. Web search engines allow retrieval based on keywords. Nevertheless, published documents may be incomplete, obsolete or huge. Therefore, search also might include quality criteria such as completeness, recentness, update frequency, and granularity. Traditional DBMS rigidity does not allow the expression of user preferences based on soft criteria. Fuzzy DBMS, such as SQLfi, are necessary instead. Here we present a tool for the selection of the best data sources and documents in terms of user preferences. Documents and data sources would be described according to quality parameters stored in a catalog. Retrieval would be done by means of fuzzy SQLf queries. Our tool offers a user-oriented wizard to allow the expression of requirements.","PeriodicalId":314834,"journal":{"name":"18th International Workshop on Database and Expert Systems Applications (DEXA 2007)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124354670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ontology exploration and modelling become challenging tasks when knowledge domains are huge and involve many relationships and restrictions. In such circumstances, the ability to inspect and modify only specific parts of a given model may enable designers to achieve better results by focusing on limited subsets of the domain. The work proposed in this paper extends a visualisation plug-in for Protege (Ontosphere3d) with so-called "Logical Views" in order to provide explicit support for visualising and modelling subsets of a given knowledge domain. We define a logical view as a user-definable set of ontology entities (concepts and relations) having a so-called subject area in common. Once defined, logical views can be stored inside the ontology model in the form of annotation properties; as a consequence, the view definition is completely independent of the tool employed for its creation and can easily be ported to different platforms and development environments.
{"title":"Ontology Exploration through Logical Views in Protégé","authors":"A. Bosca, Dario Bonino","doi":"10.1109/DEXA.2007.155","DOIUrl":"https://doi.org/10.1109/DEXA.2007.155","url":null,"abstract":"Ontology exploration or modelling becomes a challenging task when knowledge domains are huge and involve many relationships and restrictions. In such circumstances the ability to inspect and modify only specific parts of a given model may enable designers to achieve better results, by focusing on limited subsets of the domain. The work proposed in this paper extends a visualization plug-in for Protege (Ontosphere3d) with so-called \"Logical Views\" in order to provide an explicit support for visualizing and modelling subsets of a given knowledge domain. We intend a logical view as a user-definable set of ontology entities (concepts and relations) having in common a so-called subject area. Once defined, logical views can be stored inside the ontology model inform of annotation properties; as a consequence the view definition is completely independent from the tool employed for its creation and can be easily ported to different platforms and development environments.","PeriodicalId":314834,"journal":{"name":"18th International Workshop on Database and Expert Systems Applications (DEXA 2007)","volume":"2006 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127641571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The process of investigating a crime committed by digital means is called digital forensics (DF). DF training is critical, especially since first responders to a digital crime scene often contaminate the evidence unknowingly. A pattern is a generic solution to a recurring problem. The authors propose a DF pattern template (a patlet, as it is only a proposal) to address the above-mentioned issue. The patlet is illustrated by its application using a live-CD-based tool for first-responder validation of DF evidence.
{"title":"Patlet for Digital Forensics First Responders","authors":"D. Kotzé, M. Olivier","doi":"10.1109/DEXA.2007.30","DOIUrl":"https://doi.org/10.1109/DEXA.2007.30","url":null,"abstract":"The process of investigating a crime committed by using digital means is called digital forensics (DF). DF training is critical, especially since first responders to a digital crime scene often contaminate the evidence unknowingly. A pattern is a generic solution to a repeating problem. The authors propose a DF pattern template (patlet - as it is only a proposal) to govern the above-mentioned issue. The patlet is illustrated by its application using a live-CD based tool for first responder validation of DF evidence.","PeriodicalId":314834,"journal":{"name":"18th International Workshop on Database and Expert Systems Applications (DEXA 2007)","volume":"34 22","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132938317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}