F. Corradini, A. Polzonetti, Romeo Pruno, L. Forastieri
In this paper we describe a working implementation of a collaborative framework for document exchange management in the domain of e-government. Our study of document process management for public administration enables an efficient exchange methodology for collaborative work in e-government. Our case study, funded by the Marche Region, represents an innovative solution for modeling, designing, and sharing electronic documents between public administrations and citizens. The architecture provides interoperability between different platforms through a standard Web interface.
{"title":"Document Exchange Methodology for Collaborative Work in e-Government","authors":"F. Corradini, A. Polzonetti, Romeo Pruno, L. Forastieri","doi":"10.1109/DEXA.2006.54","DOIUrl":"https://doi.org/10.1109/DEXA.2006.54","url":null,"abstract":"In this paper we describe a working implementation of a collaborative framework for document exchange management in the domain of e-government. Our study of document process management for public administration enables an efficient exchange methodology for collaborative work in e-government. Our case study, funded by the Marche Region, represents an innovative solution for modeling, designing, and sharing electronic documents between public administrations and citizens. The architecture provides interoperability between different platforms through a standard Web interface.","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"392 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116667354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Role-based access control (RBAC), where object accesses are controlled by roles (or job functions), is a more feasible alternative to traditional access control mechanisms. Constraints play a critical role in realizing and providing fine-grained RBAC in diverse domains such as P2P and grid computing. In this paper, we show how events and authorization rules are used to provide fine-grained RBAC. First, simple events are identified for the RBAC domain. Second, various event operators for modeling constraints such as precedence, non-occurrence, dependency, and their combinations are introduced. Third, we discuss how event-based RBAC policies are specified using both simple and complex events. Finally, we discuss how the proposed fine-grained RBAC policies can be exploited for P2P resource management.
{"title":"How to Use Events and Rules for Supporting Role-Based Security? (Invited Paper)","authors":"R. Adaikkalavan, Sharma Chakravarthy","doi":"10.1109/DEXA.2006.68","DOIUrl":"https://doi.org/10.1109/DEXA.2006.68","url":null,"abstract":"Role-based access control (RBAC), where object accesses are controlled by roles (or job functions), is a more feasible alternative to traditional access control mechanisms. Constraints play a critical role in realizing and providing fine-grained RBAC in diverse domains such as P2P and grid computing. In this paper, we show how events and authorization rules are used to provide fine-grained RBAC. First, simple events are identified for the RBAC domain. Second, various event operators for modeling constraints such as precedence, non-occurrence, dependency, and their combinations are introduced. Third, we discuss how event-based RBAC policies are specified using both simple and complex events. Finally, we discuss how the proposed fine-grained RBAC policies can be exploited for P2P resource management.","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127548368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
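The constraint operators named in this abstract can be illustrated with a small sketch. All names below (`EventLog`, `can_activate`, the badge events) are invented for illustration and are not the authors' implementation; the sketch only shows how a precedence constraint (a required event must already have occurred) and a non-occurrence constraint (a forbidden event must not have occurred) can gate a role activation:

```python
from dataclasses import dataclass, field

@dataclass
class EventLog:
    """Append-only history of simple events, in order of occurrence."""
    events: list = field(default_factory=list)

    def record(self, name: str) -> None:
        self.events.append(name)

    def occurred(self, name: str) -> bool:
        return name in self.events

def can_activate(log: EventLog, required: str, forbidden: str) -> bool:
    # Precedence constraint: `required` must precede the activation request.
    # Non-occurrence constraint: `forbidden` must not have occurred.
    return log.occurred(required) and not log.occurred(forbidden)

log = EventLog()
log.record("badge_in")  # e.g., a doctor badges into the ward
ok = can_activate(log, required="badge_in", forbidden="badge_out")
```

Complex events (sequences, disjunctions, combinations) would be built by composing such predicates; the paper's event operators generalize this idea.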
The paper presents a schema repository, an original repository containing different kinds of database schemas. The repository is part of a multidisciplinary approach to schema evolution called the predictive approach for database evolution. The schema repository has a dual role in the approach: (1) during the data-mining process, the repository is used to identify and analyze trends in collected schemas belonging to the same domain; (2) the repository is used in building the requirements ontology, a domain ontology that contributes to database design and its evolution. This paper presents both the design of such a repository and a heuristic-based method to populate it.
{"title":"Schema Repository for Database Schema Evolution","authors":"Hassina Bounif, R. Pottinger","doi":"10.1109/DEXA.2006.125","DOIUrl":"https://doi.org/10.1109/DEXA.2006.125","url":null,"abstract":"The paper presents a schema repository, an original repository containing different kinds of database schemas. The repository is part of a multidisciplinary approach to schema evolution called the predictive approach for database evolution. The schema repository has a dual role in the approach: (1) during the data-mining process, the repository is used to identify and analyze trends in collected schemas belonging to the same domain; (2) the repository is used in building the requirements ontology, a domain ontology that contributes to database design and its evolution. This paper presents both the design of such a repository and a heuristic-based method to populate it.","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121789108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Johannes Osrael, Lorenz Froihofer, Georg Stoifl, Lucas Weigl, K. Zagar, Igor Habjan, K. M. Göschka
Replication is a well-known technique for achieving fault tolerance in distributed systems, thereby enhancing availability. However, so far little attention has been paid to object replication using Microsoft's .NET technologies. In this paper, we present the lessons we have learned during the design and implementation of a .NET-based replication framework that allows building dependable, distributed .NET applications. Our framework supports not only traditional replication protocols, such as primary-backup replication and voting, but also a new protocol for explicit balancing between data integrity and availability. Based on our experiences, we recommend using a state-of-the-art group communication toolkit (e.g., Spread) and .NET Remoting as the basis for object replication in a .NET environment.
{"title":"Using Replication to Build Highly Available .NET Applications","authors":"Johannes Osrael, Lorenz Froihofer, Georg Stoifl, Lucas Weigl, K. Zagar, Igor Habjan, K. M. Göschka","doi":"10.1109/DEXA.2006.146","DOIUrl":"https://doi.org/10.1109/DEXA.2006.146","url":null,"abstract":"Replication is a well-known technique for achieving fault tolerance in distributed systems, thereby enhancing availability. However, so far little attention has been paid to object replication using Microsoft's .NET technologies. In this paper, we present the lessons we have learned during the design and implementation of a .NET-based replication framework that allows building dependable, distributed .NET applications. Our framework supports not only traditional replication protocols, such as primary-backup replication and voting, but also a new protocol for explicit balancing between data integrity and availability. Based on our experiences, we recommend using a state-of-the-art group communication toolkit (e.g., Spread) and .NET Remoting as the basis for object replication in a .NET environment.","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"13 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132610945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
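The primary-backup protocol mentioned in this abstract can be sketched language-neutrally. The class names below are invented; the actual framework is .NET-based and uses .NET Remoting plus a group communication toolkit rather than direct method calls. The sketch shows only the protocol's basic invariant: the primary applies each update locally and forwards it to every backup before acknowledging:

```python
class Replica:
    """Holds one copy of the replicated object state."""
    def __init__(self):
        self.state = {}

    def apply(self, key, value):
        self.state[key] = value

class Primary(Replica):
    """Primary-backup replication: clients talk only to the primary,
    which forwards every update to all backups before acknowledging."""
    def __init__(self, backups):
        super().__init__()
        self.backups = backups

    def update(self, key, value):
        self.apply(key, value)
        for backup in self.backups:  # real framework: group communication (e.g., Spread)
            backup.apply(key, value)
        return "ack"                 # acknowledged only once all backups are consistent
```

A voting protocol would instead require acknowledgements from a quorum of replicas before committing, trading latency for availability under partition.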
We propose a metamodel-oriented framework for modeling business analytical indicators. In the modeling we apply the CWM's behavioral metamodel and discuss the advantages of using it to plan and model business warehouses. We show how to model indicators in conformity with such a metamodel. This approach, which is based on a standard metamodel, allows us to realize data transformations between different implementations of different data repositories. In this way, it is possible to achieve interoperability of independently designed data warehouse systems.
{"title":"Modeling Analytical Indicators Using DataWarehouse Metamodel","authors":"Andrzej Januszewski, Tadeusz Pankowski","doi":"10.1109/DEXA.2006.98","DOIUrl":"https://doi.org/10.1109/DEXA.2006.98","url":null,"abstract":"We propose a metamodel-oriented framework for modeling business analytical indicators. In the modeling we apply the CWM's behavioral metamodel and discuss the advantages of using it to plan and model business warehouses. We show how to model indicators in conformity with such a metamodel. This approach, which is based on a standard metamodel, allows us to realize data transformations between different implementations of different data repositories. In this way, it is possible to achieve interoperability of independently designed data warehouse systems.","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134074692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
At the convergence of peer-to-peer (P2P) and service-oriented architectures is the idea of effectively and efficiently managing the distribution, heterogeneity, and autonomy of information sources and services. In this paper, we take the point of view that next-generation database management systems (DBMS) should be a federation of distributed, heterogeneous, and autonomous components. Such components constitute Web database services. We challenge the conventional notions of what constitutes a DBMS and present the full spectrum of possible DBMSs based on such a service-oriented database architecture (SODA). We examine the issues and challenges of SODA. Finally, we propose one possible instance of SODA that we call DBNet. In order to illustrate some of the research issues involved, we present query processing and optimization techniques that we have devised for DBNet.
{"title":"DBNet: A Service-Oriented Database Architecture","authors":"W. Tok, S. Bressan","doi":"10.1109/DEXA.2006.48","DOIUrl":"https://doi.org/10.1109/DEXA.2006.48","url":null,"abstract":"At the convergence of peer-to-peer (P2P) and service-oriented architectures is the idea of effectively and efficiently managing the distribution, heterogeneity, and autonomy of information sources and services. In this paper, we take the point of view that next-generation database management systems (DBMS) should be a federation of distributed, heterogeneous, and autonomous components. Such components constitute Web database services. We challenge the conventional notions of what constitutes a DBMS and present the full spectrum of possible DBMSs based on such a service-oriented database architecture (SODA). We examine the issues and challenges of SODA. Finally, we propose one possible instance of SODA that we call DBNet. In order to illustrate some of the research issues involved, we present query processing and optimization techniques that we have devised for DBNet.","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134387338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hiroyuki Echigo, Hiroaki Yuze, Tsuyoshi Hoshikawa, Kazuo Takahata, N. Sawano, Y. Shibata
In this paper, a robust, large-scale, resident-oriented safety information system for use when disasters occur, built over a nationwide high-speed network, is introduced. Evacuated residents can register their safety information, whether or not they were able to evacuate safely, with the local safety information servers, using PCs or terminals at the evacuation area or mobile terminals on the way to evacuation. All local information servers are interconnected by a wireless network; the safety information is sent to an upper-layer database in the district area and finally integrated into district-level safety information for that region. In our system, local servers damaged by the disaster can be detected and recovered manually through the upper-layer database server. The upper-layer database servers, in turn, are backed up by mirror servers placed at mutually distant locations, so that a single disaster cannot destroy or disable all of them. Thus, by introducing two levels of redundancy and backup, a larger-scale and more robust safety information database system can be realized.
{"title":"Distributed Disaster Information System over Japan Gigabit Network","authors":"Hiroyuki Echigo, Hiroaki Yuze, Tsuyoshi Hoshikawa, Kazuo Takahata, N. Sawano, Y. Shibata","doi":"10.1109/DEXA.2006.52","DOIUrl":"https://doi.org/10.1109/DEXA.2006.52","url":null,"abstract":"In this paper, a robust, large-scale, resident-oriented safety information system for use when disasters occur, built over a nationwide high-speed network, is introduced. Evacuated residents can register their safety information, whether or not they were able to evacuate safely, with the local safety information servers, using PCs or terminals at the evacuation area or mobile terminals on the way to evacuation. All local information servers are interconnected by a wireless network; the safety information is sent to an upper-layer database in the district area and finally integrated into district-level safety information for that region. In our system, local servers damaged by the disaster can be detected and recovered manually through the upper-layer database server. The upper-layer database servers, in turn, are backed up by mirror servers placed at mutually distant locations, so that a single disaster cannot destroy or disable all of them. Thus, by introducing two levels of redundancy and backup, a larger-scale and more robust safety information database system can be realized.","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"281 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134406990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
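The registration failover behavior described in this abstract can be sketched as follows. The class and function names are hypothetical (the real system runs over the Japan Gigabit Network with wireless interconnects); the sketch shows only that a resident's registration is tried against the local server first and falls back to a district mirror when the local server is unreachable:

```python
class SafetyServer:
    """One safety-information server (local, district, or mirror)."""
    def __init__(self, alive=True):
        self.alive = alive
        self.records = {}

    def register(self, resident, status):
        if not self.alive:
            raise ConnectionError("server unreachable")
        self.records[resident] = status

def register_with_failover(servers, resident, status):
    """Try servers in priority order (local first, then mirrors);
    the first reachable one stores the record."""
    for server in servers:
        try:
            server.register(resident, status)
            return server
        except ConnectionError:
            continue  # server damaged by the disaster; try the next one
    raise RuntimeError("no safety server reachable")
```

Placing the mirrors at mutually distant locations, as the paper proposes, makes it unlikely that one disaster takes every server in this priority list offline at once.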
Organizations of today must learn how to learn in order to become competitive. How an organization reaches maturity in this area is not clear. This paper presents an initial version of a maturity model that aims to set directions for becoming a learning organization and to assist people in discussing where in this process their organization finds itself. Future work consists of detailing the model and developing guidelines for how to measure maturity.
{"title":"Towards a Maturity Model for Learning Organizations – the Role of Knowledge Management","authors":"Lena Aggestam","doi":"10.1109/DEXA.2006.138","DOIUrl":"https://doi.org/10.1109/DEXA.2006.138","url":null,"abstract":"Organizations of today must learn how to learn in order to become competitive. How an organization reaches maturity in this area is not clear. This paper presents an initial version of a maturity model that aims to set directions for becoming a learning organization and to assist people in discussing where in this process their organization finds itself. Future work consists of detailing the model and developing guidelines for how to measure maturity.","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129604754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mass spectrometry is the work-horse technology of the emerging field of metabolomics. Community-wide accepted data models and XML formats for data interchange, such as mzData, are currently in development. The information contained in these models is sufficient to create applications and databases in a model-driven architecture (MDA). This makes it possible to (re-)create the necessary code base and backend database with minimal manual coding. We present an infrastructure to support the use of these data standards. It uses the Eclipse framework to generate Java objects, XML input/output, database persistence, and a user-friendly editor for both the XML files and the database content. A prototype of a Web frontend has been created to view, verify, and upload data to such a repository.
{"title":"Storage and Processing of Mass Spectrometry Data","authors":"S. Klie, S. Neumann","doi":"10.1109/DEXA.2006.131","DOIUrl":"https://doi.org/10.1109/DEXA.2006.131","url":null,"abstract":"Mass spectrometry is the work-horse technology of the emerging field of metabolomics. Community-wide accepted data models and XML formats for data interchange, such as mzData, are currently in development. The information contained in these models is sufficient to create applications and databases in a model-driven architecture (MDA). This makes it possible to (re-)create the necessary code base and backend database with minimal manual coding. We present an infrastructure to support the use of these data standards. It uses the Eclipse framework to generate Java objects, XML input/output, database persistence, and a user-friendly editor for both the XML files and the database content. A prototype of a Web frontend has been created to view, verify, and upload data to such a repository.","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116035132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As XML becomes a serious candidate medium for the management and interchange of critical enterprise data, more attention to the issue of data integrity maintenance is needed. We propose a mapping of XML to relational that takes into account the integrity constraints expressed in XML Schema. We present an extension, called XShreX, of the ShreX mapping. We report the preliminary results of a comparative performance analysis using a mainstream commercial relational database management system. The results suggest that the extension of ShreX does not come at a prohibitive cost for insertions, deletions, updates, and queries. In the case of queries, XShreX can even yield a performance improvement.
{"title":"XShreX: Maintaining Integrity Constraints in the Mapping of XML Schema to Relational","authors":"Q. Lee, S. Bressan, J. Rahayu","doi":"10.1109/DEXA.2006.150","DOIUrl":"https://doi.org/10.1109/DEXA.2006.150","url":null,"abstract":"As XML becomes a serious candidate medium for the management and interchange of critical enterprise data, more attention to the issue of data integrity maintenance is needed. We propose a mapping of XML to relational that takes into account the integrity constraints expressed in XML Schema. We present an extension, called XShreX, of the ShreX mapping. We report the preliminary results of a comparative performance analysis using a mainstream commercial relational database management system. The results suggest that the extension of ShreX does not come at a prohibitive cost for insertions, deletions, updates, and queries. In the case of queries, XShreX can even yield a performance improvement.","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116710434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
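The idea of carrying XML Schema constraints into a relational mapping, rather than dropping them as a plain shredding would, can be sketched as follows. This is not XShreX's actual algorithm; the facet names mirror XML Schema constraining facets (`maxLength`, `minInclusive`) plus an identity constraint flag, and the generated DDL and table layout are invented for illustration:

```python
def ddl_for(table, columns):
    """Emit CREATE TABLE DDL, translating XML Schema facets into SQL
    constraints instead of discarding them.

    `columns` maps a column name to (base SQL type, facet dict)."""
    cols = []
    for name, (sqltype, facets) in columns.items():
        if "maxLength" in facets:                    # xs:maxLength -> VARCHAR(n)
            sqltype = f"VARCHAR({facets['maxLength']})"
        constraint = ""
        if facets.get("unique"):                     # xs:unique -> UNIQUE
            constraint += " UNIQUE"
        if facets.get("minInclusive") is not None:   # xs:minInclusive -> CHECK
            constraint += f" CHECK ({name} >= {facets['minInclusive']})"
        cols.append(f"{name} {sqltype}{constraint}")
    return f"CREATE TABLE {table} ({', '.join(cols)});"

ddl = ddl_for("book", {
    "isbn": ("VARCHAR", {"maxLength": 13, "unique": True}),
    "price": ("DECIMAL", {"minInclusive": 0}),
})
```

Keeping the constraints in the relational schema lets the DBMS enforce them on insert and update, which is how such a mapping can preserve integrity without extra application-level checks.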