The diversity and sophistication of terminals and switch devices connected to user networks (home networks and office networks) make it difficult for users without technical knowledge to operate user networks flexibly. The purpose of our research is to alleviate the difficulty of user network operation for end users. To achieve this, we believe that a knowledge base is the key. With a knowledge base that represents each user environment well, users do not need to know the user environment in detail in order to operate the user network. Individual knowledge is a part of our proposed knowledge base and plays the important role of representing the current user environment correctly. This paper describes a method for generating individual knowledge that supports user network operation by relating a user network ontology to local protocols. We have verified that the instance generation method using SNMP and UPnP works well, in part, in our testing environment.
{"title":"Individual Knowledge Generation for Individual User Environments by Relating User Network Ontology to Local Protocols","authors":"K. Nishikawa, N. Nishiyama, F. Ito, T. Yamamura","doi":"10.1109/DEXA.2006.73","DOIUrl":"https://doi.org/10.1109/DEXA.2006.73","url":null,"abstract":"The diversity and sophistication of terminals and switch devices that are connected to user networks (home networks and office networks) makes it difficult for users who have no technical knowledge to operate user networks flexibly. The purpose of our research is to alleviate the difficulty of user network operation by end users. To achieve this, we believe that a knowledge base is the key. By using a knowledge base that represents each user environment well, users do not need to know the user environment in detail in order to operate the user network. Individual knowledge is apart of our proposed knowledge base and plays the important role of representing the current user environment correctly. This paper describes a method for generating individual knowledge that supports user network operation by employing user network ontology with local protocols. We have verified that the instance generation method using SNMP and UPnP works well partly in our testing environment","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133626486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mohamed T. Ibrahim, R. Anthony, Torsten Eymann, A. Taleb-Bendiab, L. Gruenwald
This panel paper sets out to discuss what self-adaptation means, and to explore the extent to which current autonomic systems exhibit truly self-adaptive behaviour. Many of the currently cited examples are clearly adaptive, but debate remains as to what extent they are simply following prescribed adaptation rules within preset bounds, and to what extent they have the ability to truly learn new behaviour. Is there a standard test that can be applied to differentiate? Is adaptive behaviour sufficient anyway? Other autonomic computing issues are also discussed.
{"title":"Exploring Adaptation & Self-Adaptation in Autonomic Computing Systems","authors":"Mohamed T. Ibrahim, R. Anthony, Torsten Eymann, A. Taleb-Bendiab, L. Gruenwald","doi":"10.1109/DEXA.2006.57","DOIUrl":"https://doi.org/10.1109/DEXA.2006.57","url":null,"abstract":"This panel paper sets out to discuss what self-adaptation means, and to explore the extent to which current autonomic systems exhibit truly self-adaptive behaviour. Many of the currently cited examples are clearly adaptive, but debate remains as to what extent they are simply following prescribed adaptation rules within preset bounds, and to what extent they have the ability to truly learn new behaviour. Is there a standard test that can be applied to differentiate? Is adaptive behaviour sufficient anyway? Other autonomic computing issues are also discussed.","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130945236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we analyze the relation between the content of business news and long-term market trends. We describe the cleansing and classification of business news, investigate how similar good news and bad news are, and examine how their ratio behaves in the context of long-term market trends. We have processed more than 400,000 business news items from the years 1999 to 2005. We present the results of our experiments and their possible impact on the forecasting of long-term market trends.
{"title":"Text Mining of Business News for Forecasting","authors":"P. Kroha, Ricardo Baeza-Yates, Björn Krellner","doi":"10.1109/DEXA.2006.135","DOIUrl":"https://doi.org/10.1109/DEXA.2006.135","url":null,"abstract":"In this paper, we analyze the relation between the content of business news and long-term market trends. We describe cleansing and classification of business news, we investigate how much similarity good news and bad news have, and how their ratio behaves in context of long-terms market trends. We have processed more than 400 thousand business news coming from the years 1999 to 2005. We present results of our experiments and their possible impact on forecasting of long-term market trends","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115290421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Esther Palomar, J. Tapiador, J. Castro, A. Ribagorda
A significant challenge for peer-to-peer (P2P) systems is maintaining the correctness and consistency of their global data structures and shared contents as peers independently and unpredictably join and leave the system. In such networks, security mechanisms must be applied to prevent attacks based on unauthorized content modifications. In this paper, we propose a content authentication protocol for pure P2P networks. The scheme also incorporates a rational content access procedure based on proofs of computational effort. Our proposal relies on a set of peers playing the role of a certification authority, since it is unrealistic to assume that appropriate trusted third parties can be deployed in such environments.
{"title":"A Protocol for Secure Content Distribution in Pure P2P Networks","authors":"Esther Palomar, J. Tapiador, J. Castro, A. Ribagorda","doi":"10.1109/DEXA.2006.17","DOIUrl":"https://doi.org/10.1109/DEXA.2006.17","url":null,"abstract":"A significant challenge for peer-to-peer (P2P) systems is maintaining the correctness and consistency of their global data structures and shared contents as peers independently and unpredictably join and leave the system. In such networks, it is necessary that some security mechanisms will be applied with the aim of avoiding attacks based on non-authorized content modifications. In this paper, we propose a content authentication protocol for pure P2P networks. The scheme also incorporates a rational content access procedure based on proofs of computational effort. Our proposal relies on a set of peers playing the role of a certification authority, since it is unrealistic to assume that appropriate trusted third parties can be deployed in such environments","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"173 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116425699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
D. Bianchini, V. D. Antonellis, M. Melchiori, Denise Salvi
The ever-growing evolution of mobile technologies makes it possible to access services, in an itinerant and ubiquitous way, through many kinds of mobile devices (laptops, palmtops, cellular phones and so on). On the other hand, the highly dynamic and context-dependent requirements of services in distributed environments motivate the use of ontology-based techniques and tools to automatically locate services that fulfil a given user request. Our aim in this work is to propose a lightweight ontology-based approach to service discovery on mobile terminals, taking into account the limited user interaction supported by such devices. In the proposed approach, the user specifies the requested service in terms of expected capabilities. A semantically enriched framework for describing services and an ontology-based discovery approach that exploits it are applied to find services fulfilling the user's needs, combining different kinds of comparison strategies to provide flexible and efficient matchmaking between service descriptions.
{"title":"Lightweight Ontology-Based Service Discovery in Mobile Environments","authors":"D. Bianchini, V. D. Antonellis, M. Melchiori, Denise Salvi","doi":"10.1109/DEXA.2006.83","DOIUrl":"https://doi.org/10.1109/DEXA.2006.83","url":null,"abstract":"The current ever-growing evolution of mobile technologies suggests the possibility of accessing services, in an itinerant and ubiquitous way, through many kinds of mobile devices (laptops, palmtops, cellular phones and so on). On the other hand, the highly dynamic and context-dependent requirements of services in distributed environments motivate and recommend the use of ontology-based techniques and tools to automatically locate services that fulfil a given user request. Our aim in this work is to propose a lightweight ontology-based approach to allow the service discovery on mobile terminals, taking into account limited user interactions supported by this kind of devices. In the proposed approach, the user can specify the requested service in terms of expected capabilities. A semantic-enriched framework to describe services and an ontology-based discovery approach where such framework is exploited was applied to find services fulfilling the user needs, combining together different kinds of comparison strategies that provide a flexible and efficient matchmaking between service descriptions","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125410029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the past, multi-agent systems were used in proprietary environments. Nowadays, these systems are used broadly in open distributed networks, such as e-commerce applications for the Internet. An environment such as the Internet cannot be considered a safe place. Thus, multi-agent systems should have security mechanisms, e.g. confidentiality and integrity. The XML security specifications are XML-based standards that provide security mechanisms. They include XML digital signature for digital signatures, XML encryption for cryptography, and the XML key management specification for public key infrastructure. Agents may use a FIPA (Foundation for Intelligent Physical Agents) standard based on RDF (Resource Description Framework), a message content standard in the XML language. Using this standard, agents can communicate by exchanging XML messages, but these messages are not secure. In this article, we propose a secure communication model for agents based on RDF and the XML security specifications.
{"title":"Security on MASs with XML Security Specifications","authors":"Emerson Oliveira, Zair Abdelouahab, D. Lopes","doi":"10.1109/DEXA.2006.126","DOIUrl":"https://doi.org/10.1109/DEXA.2006.126","url":null,"abstract":"In the past, multi-agent systems were used in proprietary environments. Nowadays, these systems have been used broadly in open distributed networks, such as e-commerce applications for Internet. An environment such as the Internet cannot be considered a safe place. Thus, multi-agent systems should have security mechanisms, e.g. confidentiality and integrity. The XML security specifications are standards that are based on XML and provide security mechanisms. They include: XML digital signature for digital signature; XML encryption for cryptography; XML key management specification for public key infrastructure. Agents may use a FIPA (foundation for intelligent physical agents) standard called RDF(resource description framework), which is a message content standard in XML language. Using this standard, agents can communicate exchanging XML messages, but these messages are not secure. In this article, we propose a secure communication model for agents based on RDF and the XML security specifications","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116801344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
John N. Wilson, R. Gourlay, Robert Japp, M. Neumüller
The effective grouping, or partitioning, of semistructured data is of fundamental importance when providing support for queries. Partitions allow items within the data set that share common structural properties to be identified efficiently. This allows queries that make use of these properties, such as branching path expressions, to be accelerated. Here, we evaluate the effectiveness of several partitioning techniques by establishing the number of partitions that each scheme can identify over a given data set. In particular, we explore the use of parameterised indexes, based upon the notion of forward and backward bisimilarity, as a means of partitioning semistructured data, demonstrating that even restricted instances of such indexes can be used to identify the majority of relevant partitions in the data.
{"title":"Extracting Partition Statistics from Semistructured Data","authors":"John N. Wilson, R. Gourlay, Robert Japp, M. Neumüller","doi":"10.1109/DEXA.2006.59","DOIUrl":"https://doi.org/10.1109/DEXA.2006.59","url":null,"abstract":"The effective grouping, or partitioning, of semistructured data is of fundamental importance when providing support for queries. Partitions allow items within the data set that share common structural properties to be identified efficiently. This allows queries that make use of these properties, such as branching path expressions, to be accelerated. Here, we evaluate the effectiveness of several partitioning techniques by establishing the number of partitions that each scheme can identify over a given data set. In particular, we explore the use of parameterised indexes, based upon the notion of forward and backward bisimilarity, as a means of partitioning semistructured data; demonstrating that even restricted instances of such indexes can be used to identify the majority of relevant partitions in the data","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129759967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Business-to-business e-commerce is emerging as trading parties attempt to integrate their heterogeneous business processes and automate the exchange of their services. To enable collaboration, the heterogeneity of cross-organisational business processes requires an adaptation of existing concepts for business process management. A shared ontology can encapsulate heterogeneity in business process models and offer common concepts to different partners. A problem is that, even when agreeing on the information to be exchanged, partners usually expose different scenarios for collaboration. In this paper, we propose an architecture for the flexible modeling of company processes to support larger-scale B2B integration. The proposal is based on an extended ebXML scenario for companies collaborating in a B2B context.
{"title":"Towards Integrating Collaborative Business Process Based on a Process Ontology and EbXML Collaboration Scenario","authors":"Razika Driouche, Zizette Boufaïda, F. Kordon","doi":"10.1109/DEXA.2006.139","DOIUrl":"https://doi.org/10.1109/DEXA.2006.139","url":null,"abstract":"Business-to-business e-commerce is emerging as trading parties are attempting to integrate their heterogeneous business processes and automate exchange of their services. To be able to collaborate, heterogeneity cross-organisational business processes needs adaptation of existing concepts for business process management. A shared ontology can encapsulate heterogeneity in business process model and offer common concepts to different partners. A problem is that, even agreeing on information to be exchanged, the partners usually expose different scenarios for collaboration. In this paper, we propose architecture for flexible modeling of company processes, to support a larger scale of B2B integration. The proposal is based on an extended ebXML scenario for companies to collaborate in a B2B context","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"295 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123090565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data management has become one of the central issues in high-throughput biological screening. In particular, high-throughput screening (HTS) applying automated microscopy requires a system capable of storing and analyzing vast amounts of image and numeric data. These data include comprehensive information about the bioactive molecules, the targeted genes, and the images as well as the data matrices extracted from them after acquisition. Here we present a Web-based bioinformatics solution for the management of images from different screening microscopes: the screening image browser (SIB). We describe this tool as well as an image retrieval mechanism, both working as a framework for browsing and analyzing screening information. A major outcome of this database is a unique, fully operational, distributed digital library of screening image data accessible to researchers. SIB is a scientific database that enables effective data management through a standard Web-browser interface. The application utilizes a robust security architecture and is designed for efficient data exploration.
{"title":"SIB: Database and Tool for the Integration and Browsing of Large Scale Image Hhigh-Throughput Screening Data","authors":"K. Kozak, M. Kozak, E. Krausz","doi":"10.1109/DEXA.2006.128","DOIUrl":"https://doi.org/10.1109/DEXA.2006.128","url":null,"abstract":"Data management has become one of the central issues in high-throughput biological screening. In particular high-throughput screening (HTS), applying automated microscopy, requires a system which is capable of storing and analyzing vast amounts of image and numeric data. These data include comprehensive information about the bioactive molecules, the targeted genes, and images as well as their extracted data matrices after acquisition. Here we present a Web-based bioinformatics solution for the management of images from different screening microscopes: the screening image browser (SIB). The following points describe this tool as well as an image retrieval mechanism, both working as a framework for browsing and analyzing screening information. A major outcome of this database is a unique, fully operational, distributed digital library of screening image data accessible to researchers. SIB is a scientific database that enables effective data management accessible through a standard Web-browser interface. 
The application utilizes a robust security architecture and is designed for efficient data exploration","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133657973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time data processing systems are increasingly popular nowadays. Data warehouses not only collect terabytes of data, they also process endless data streams. To support such a situation, the data extraction process must also become continuous. Here, the problem of failure resistance arises. It is important not only to process data on time; it is even more important not to lose any data when a failure occurs. We achieve this by applying redundant distributed stream processing. In this paper, we present a fault-tolerant system designed for processing data streams originating from geographically distributed sources.
{"title":"Fault-Tolerant Distributed Stream Processing System","authors":"M. Gorawski, Pawel Marks","doi":"10.1109/DEXA.2006.61","DOIUrl":"https://doi.org/10.1109/DEXA.2006.61","url":null,"abstract":"Real-time data processing systems are more and more popular nowadays. Data warehouses not only collect terabytes of data, they also process endless data streams. To support such a situation, a data extraction process must become a continuous process also. Here a problem of a failure resistance arises. It is important not only to process a set of data on time, even more important is not to lose any data when a failure occurs. We achieve this by applying a redundant distributed stream processing. In this paper, we present a fault-tolerant system designed for processing data streams originating from geographically distributed sources","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130125779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}