
Latest Publications: 17th International Workshop on Database and Expert Systems Applications (DEXA'06)

A Tool for Collaborative Construction of Large Biological Ontologies
Jie Bao, Zhilian Hu, Doina Caragea, J. Reecy, Vasant G Honavar
In order for ontologies to be broadly useful to the scientific community, they need to capture the knowledge and expertise of multiple experts and research groups. Consequently, the construction of such ontologies necessarily requires collaboration among individual experts or research groups. Support for such collaboration is largely lacking in existing ontology development environments. We describe some initial steps towards the development of a collaborative ontology development environment. Specifically, we describe an ontology editing tool, COB editor, which exploits the notion of modular ontologies (or ontology packages) to support sharing, reuse, and collaborative editing of partial-order (i.e., DAG-structured) ontologies. COB editor can engage diverse and relatively autonomous communities of biologists in the process of creating the ontologies needed for annotating, integrating, and analyzing diverse sources of 'omics' data.
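To make the idea of modular, DAG-structured ontology packages concrete, the sketch below shows one possible in-memory representation: terms with multiple parents (a partial order rather than a tree), imports of other packages, and a check that keeps the combined structure acyclic. This is an illustrative assumption only; the class and method names are invented here and do not describe COB editor's actual data model.

```python
class OntologyPackage:
    """Illustrative sketch of a modular, DAG-structured ontology package.

    Terms may have several parents (a partial order, i.e. a DAG rather
    than a tree), and a package may import other packages so that a
    research group can edit its own module while reusing others'.
    NOTE: names and structure are assumptions, not COB editor internals.
    """

    def __init__(self, name, imports=()):
        self.name = name
        self.imports = list(imports)   # other OntologyPackage instances
        self.parents = {}              # term -> set of parent terms

    def add_term(self, term, parents=()):
        """Add `term` under the given parents, refusing edges that would
        make the combined (local + imported) structure cyclic."""
        for p in parents:
            if self._reaches(p, term, set()):
                raise ValueError(f"edge {term!r} -> {p!r} would create a cycle")
        self.parents.setdefault(term, set()).update(parents)

    def _parents_of(self, term):
        found = set(self.parents.get(term, set()))
        for pkg in self.imports:
            found |= pkg.parents.get(term, set())
        return found

    def _reaches(self, start, goal, seen):
        # Depth-first search upward along parent links, across imports.
        if start == goal:
            return True
        for parent in self._parents_of(start):
            if parent not in seen:
                seen.add(parent)
                if self._reaches(parent, goal, seen):
                    return True
        return False
```

For example, a hypothetical "muscle biology" package could import a shared anatomy package and add its own terms beneath imported ones, while the cycle check preserves the DAG property across module boundaries.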
{"title":"A Tool for Collaborative Construction of Large Biological Ontologies","authors":"Jie Bao, Zhilian Hu, Doina Caragea, J. Reecy, Vasant G Honavar","doi":"10.1109/DEXA.2006.20","DOIUrl":"https://doi.org/10.1109/DEXA.2006.20","url":null,"abstract":"In order for ontologies to be broadly useful to the scientific community, they need to capture knowledge and expertise of multiple experts and research groups. Consequently, the construction of such ontologies necessarily requires collaboration among individual experts or research groups. Support for such collaboration is largely lacking in existing ontology development environments. We describe some initial steps towards the development of a collaborative ontology development environment. Specifically, we describe an ontology editing tool COB editor which exploits the notion of modular ontologies (or ontology packages) to support sharing, reuse, and collaborative editing of partial order (i.e., DAG-structured) ontologies. COB editor can engage diverse and relatively autonomous communities of biologists in the process of creating the ontologies needed for annotating, integrating, and analyzing diverse sources of `omics' data","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132354235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 21
Performance Analysis of the varphi Failure Detector with its Tunable Parameters
Naohiro Hayashibara, M. Takizawa
In this paper, we explain an implementation of an accrual failure detector that we call the phi failure detector. The particularity of the phi failure detector is that it dynamically adjusts the scale on which the suspicion level is expressed to current network conditions. We ran the experiment on a LAN over a whole day and evaluated the behavior of our phi failure detector. We then discuss the parameters of the failure detector based on our experimental results.
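For readers unfamiliar with accrual failure detectors, the sketch below illustrates the general phi idea: heartbeat inter-arrival times are collected in a sliding window, and the suspicion level is the negative log of the probability that a heartbeat would still arrive this late under the observed arrival statistics, so the scale adapts to current network conditions. The normal-distribution model, window size, and class name are assumptions made here for illustration, not the authors' exact implementation or parameter settings.

```python
import math
from collections import deque

class PhiAccrualDetector:
    """Sketch of an accrual (phi-style) failure detector.

    Assumes heartbeat inter-arrival times are roughly normally
    distributed; the paper's implementation and tunable parameters
    may differ.
    """

    def __init__(self, window_size=1000):
        self.intervals = deque(maxlen=window_size)  # recent inter-arrival times (s)
        self.last_arrival = None                    # time of last heartbeat

    def heartbeat(self, now):
        """Record a heartbeat received at time `now` (seconds)."""
        if self.last_arrival is not None:
            self.intervals.append(now - self.last_arrival)
        self.last_arrival = now

    def phi(self, now):
        """Suspicion level at time `now`: -log10 of the probability that a
        heartbeat later than the elapsed time would still arrive normally."""
        if self.last_arrival is None or len(self.intervals) < 2:
            return 0.0
        mean = sum(self.intervals) / len(self.intervals)
        var = sum((x - mean) ** 2 for x in self.intervals) / len(self.intervals)
        std = max(math.sqrt(var), 1e-6)
        elapsed = now - self.last_arrival
        # P(inter-arrival time > elapsed) under N(mean, std)
        p_later = 0.5 * (1.0 - math.erf((elapsed - mean) / (std * math.sqrt(2.0))))
        return -math.log10(max(p_later, 1e-12))
```

An application then compares phi against its own threshold (one of the tunable parameters): a low threshold detects failures quickly but risks wrong suspicions, while a high threshold is more conservative.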
{"title":"Performance Analysis of the varphi Failure Detector with its Tunable Parameters","authors":"Naohiro Hayashibara, M. Takizawa","doi":"10.1109/DEXA.2006.111","DOIUrl":"https://doi.org/10.1109/DEXA.2006.111","url":null,"abstract":"In this paper, we explain an implementation of an accrual failure detector, that we call the phi failure detector. The particularity of the phi failure detector is that it dynamically adjusts to current network conditions the scale on which the suspicion level is expressed. We have done the experiment in a LAN in a whole day and evaluated the behavior of our phi failure detector. Then we discuss on the parameters of the failure detector based on our experimental result","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122913074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Refinement of Correspondences in EXSMAL for XML Document Transformation
Herzi Khaled, A. Benharkat, Y. Amghar
Schema matching is an important prerequisite to the transformation of XML documents with different schemas. In this work, we are interested in the process of matching between data schemas in order to transform XML documents. After reviewing related work in the domain, we choose the EXSMAL algorithm to generate a set of correspondences. We then filter this set in order to obtain 1-1 correspondences. For this purpose, two similarity calculations are applied: path similarity and internal similarity. This refinement helps to facilitate the transformation of the XML documents. We also rely on a dynamic ontology, updated through user feedback, which describes the semantic relations between nodes, such as IsA, PartOf, Similar, etc. These semantic relations are then expressed in the LIMXS data model. The transformation uses operations such as connect and rename for simple matchings, and merge and split for complex ones.
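As a rough illustration of the refinement step, i.e. reducing a many-to-many correspondence set to 1-1 pairs, the sketch below keeps the highest-scoring correspondence for each source and target node. It assumes each correspondence already carries a combined score (for example, some aggregate of path similarity and internal similarity); the greedy policy and the function name are illustrative choices, not taken from the paper.

```python
def filter_one_to_one(correspondences):
    """Greedy filtering of weighted correspondences to a 1-1 mapping.

    `correspondences` is a list of (source_node, target_node, score)
    triples, e.g. the output of a matcher such as EXSMAL.  Pairs are
    taken in decreasing score order; a pair is dropped if its source
    or target node is already matched.
    """
    chosen, used_src, used_tgt = [], set(), set()
    for src, tgt, score in sorted(correspondences, key=lambda c: c[2], reverse=True):
        if src not in used_src and tgt not in used_tgt:
            chosen.append((src, tgt, score))
            used_src.add(src)
            used_tgt.add(tgt)
    return chosen

# Example (hypothetical scores): two candidates for "author" survive as one best pair.
pairs = [("author", "creator", 0.82), ("author", "writer", 0.65), ("title", "name", 0.71)]
print(filter_one_to_one(pairs))  # [('author', 'creator', 0.82), ('title', 'name', 0.71)]
```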
{"title":"Refinement of Correspondences in EXSMAL for XML Document Transformation","authors":"Herzi Khaled, A. Benharkat, Y. Amghar","doi":"10.1109/DEXA.2006.121","DOIUrl":"https://doi.org/10.1109/DEXA.2006.121","url":null,"abstract":"Schema matching is an important prerequisite to the transformation of XML documents with different schemas. In this work, we are interested in the process of matching between data schemes in order to transform documents XML. After explaining related works in the domain, we choose the EXSMAL algorithm to generate a set of correspondences. Then we try to filter this set in order to obtain 1-1 correspondences. In this purpose, two calculations of similarity are applied: path similarity and internal similarity. This refinement helps to facilitate the transformation of the documents XML. We also base on a dynamic ontology updated by a user feedback which describes the semantic relation between nodes like IsA, PartOf, Similar, etc. These semantic relations are then expressed in LIMXS data model. The transformation will use operations such as: connect and rename for the simple matching, merge and split for the complex ones","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"726 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132325665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Extracting Metadata from Biological Experimental Data
Badr Al-Daihani, W. A. Gray, P. Kille
The process of automatically extracting metadata from an experiment's dataset is an important stage in efficiently integrating this dataset with data available in public bioinformatics data sources. Metadata extracted from the experiment's dataset can be stored in databases and used to verify data extracted from other experiments' datasets. Moreover, the biologist can keep track of the dataset so that it can be easily retrieved later. The extracted metadata can be mined to discover useful knowledge, as well as integrated with other information using a domain ontology to reveal hidden relationships. The experiment's dataset may contain several kinds of metadata that can be used to add semantic value to linked data. This paper describes an approach for extracting metadata from an experiment's dataset. The system has been used in a preliminary investigation of aging across species.
{"title":"Extracting Metadata from Biological Experimental Data","authors":"Badr Al-Daihani, W. A. Gray, P. Kille","doi":"10.1109/DEXA.2006.58","DOIUrl":"https://doi.org/10.1109/DEXA.2006.58","url":null,"abstract":"The process of automatically extracting metadata from an experiment's dataset is an important stage in efficiently integrating this dataset with data available in public bioinformatics data sources. Metadata extracted from the experiment's dataset can be stored in databases and used to verify data extracted from other experiments' datasets. Moreover, the biologist can keep track of the dataset so that it can be easily retrieved next time. The extracted metadata can be mined to discover useful knowledge as well as integrated with other information using domain ontology to reveal hidden relationships. The experiment's dataset may contain several kinds of metadata that can be used to add semantic value to linked data. This paper describes an approach for extracting metadata from an experiment's dataset. This system has been used in a preliminary investigation of aging across species","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122054665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Data Replication and Update Management in Mobile Ad Hoc Networks (Invited Paper)
T. Hara
Data replication can drastically improve data accessibility in mobile ad hoc networks (MANETs). In this paper, we introduce our work on data replication in MANETs, particularly focusing on update management. We explain a few research issues based on both optimistic and pessimistic consistency management policies. We also describe a few prospects for future directions.
{"title":"Data Replication and Update Management in Mobile Ad Hoc Networks (Invited Paper)","authors":"T. Hara","doi":"10.1109/DEXA.2006.47","DOIUrl":"https://doi.org/10.1109/DEXA.2006.47","url":null,"abstract":"Data replication can drastically improve data accessibility in mobile ad hoc networks (MANETs). In this paper, we introduce our work that addresses data replication in MANETs, particularly focusing on update management. We explain a few research issues based on both optimistic and pessimistic consistency management policies. We also describe a few prospects for future directions","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121078759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Co-Protégé: Collaborative Ontology Building with Divergences
A. Díaz, Guillermo Baldo, G. Canals
In this paper, we present an innovative approach to developing a domain ontology in a collaborative fashion. This approach is synthesized in a groupware application called Co-Protégé, a set of plug-ins that extends Protégé. The approach is innovative because Co-Protégé allows divergent conceptualizations to coexist, together with their discussion threads, in order to record the evolution of the ontology.
{"title":"Co-Protégé: Collaborative Ontology Building with Divergences","authors":"A. Díaz, Guillermo Baldo, G. Canals","doi":"10.1109/DEXA.2006.41","DOIUrl":"https://doi.org/10.1109/DEXA.2006.41","url":null,"abstract":"In this paper we present an innovative approach to develop a domain ontology in collaborative fashion. This approach is synthesized in a groupware application which is called Co-Protege, a set of plug-ins which extends Protege. This approach is innovative because Co-Protege enables the coexistence of divergent conceptualizations and the discussion thread in order to record the ontology evolution","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"154155 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116777502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
A High-Level Architecture of a Metadata-based Ontology Matching Framework
Malgorzata Mochól, E. Simperl
One of the prerequisites for the realization of the semantic Web vision is matching techniques that are capable of handling the open, dynamic, and heterogeneous nature of semantic data in a feasible way. Currently this issue is not being optimally resolved: the majority of existing approaches to ontology matching are (implicitly) restricted to processing particular classes of ontologies and are thus unable to guarantee a predictable result quality on arbitrary inputs. Drawing on the empirical findings of two case studies in ontology engineering, we argue that a possible solution is to design a matching strategy that strives to optimize the matching process whilst being aware of the inherent dependencies between algorithms and the types of ontologies they are able to process successfully. We introduce a matching framework that, given a set of ontologies to be matched described by ontology metadata, takes into account the capabilities of existing matching algorithms (matcher metadata) and suggests appropriate ones by using a set of rules.
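The core idea, selecting matchers whose declared capabilities fit the metadata of the input ontologies, can be pictured as a simple rule check like the sketch below. The metadata keys, example values, and matcher descriptions are invented here for illustration; the framework's actual metadata vocabulary and rule language are not specified in this abstract.

```python
def suggest_matchers(ontologies, matchers):
    """Suggest matchers whose requirements are met by the metadata of
    every input ontology.

    `ontologies`: list of ontology-metadata dicts,
        e.g. {"labels": "natural-language", "structure": "deep"}
    `matchers`: list of matcher-metadata dicts,
        e.g. {"name": "lexical-matcher", "requires": {"labels": "natural-language"}}
    """
    suggested = []
    for matcher in matchers:
        applicable = all(
            all(onto.get(key) == value for key, value in matcher["requires"].items())
            for onto in ontologies
        )
        if applicable:
            suggested.append(matcher["name"])
    return suggested

# Example with hypothetical metadata: the instance-based matcher is skipped
# because neither ontology declares instance data.
ontos = [{"labels": "natural-language", "structure": "deep"},
         {"labels": "natural-language", "structure": "deep"}]
matchers = [{"name": "lexical-matcher", "requires": {"labels": "natural-language"}},
            {"name": "structural-matcher", "requires": {"structure": "deep"}},
            {"name": "instance-matcher", "requires": {"instances": True}}]
print(suggest_matchers(ontos, matchers))  # ['lexical-matcher', 'structural-matcher']
```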
{"title":"A High-Level Architecture of a Metadata-based Ontology Matching Framework","authors":"Malgorzata Mochól, E. Simperl","doi":"10.1109/DEXA.2006.9","DOIUrl":"https://doi.org/10.1109/DEXA.2006.9","url":null,"abstract":"One of the pre-requisites for the realization of the semantic Web vision are matching techniques which are capable of handling the open, dynamic and heterogeneous nature of the semantic data in a feasible way. Currently this issue is not being optimally resolved; the majority of existing approaches to ontology matching are (implicitly) restricted to processing particular classes of ontologies and thus unable to guarantee a predictable result quality on arbitrary inputs. Accounting for the empirical findings of two case studies in ontology engineering, we argue that a possible solution to cope with this situation is to design a matching strategy which strives for an optimization of the matching process whilst being aware of the inherent dependencies between algorithms and the types of ontologies these are able to process successfully. We introduce a matching framework that, given a set of ontologies to be matched described by ontology metadata, takes into account the capabilities of existing matching algorithms (matcher metadata) and suggests, by using a set of rules, appropriate ones","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123767769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Bridging the Gap between the Semantic Web and Existing Network Services
Nickolas J. G. Falkner, P. Coddington, A. Wendelborn
This paper presents an overview of a mechanism for bridging the gap between semantic Web data and services on the one hand, and existing network-based services that are not semantically annotated or do not meet the requirements of semantic Web-based applications on the other. The semantic Web is a relatively new set of technologies that interoperate well with one another but often require mediation, translation, or wrapping to interoperate with existing network-based services. Seen as an extension of network-based services and the WWW, the semantic Web constitutes an expanding system that can require significant effort to integrate and develop services while still providing seamless service to users. New components in a system must interoperate with the existing components, and their use of protocols and shared data must be structurally and semantically equivalent. The new system must continue to meet the original system requirements as well as provide the new features or facilities. We propose a new model of network services using a knowledge-based approach that defines services and their data in terms of an ontology that can be shared with other components.
{"title":"Bridging the Gap between the SemanticWeb and Existing Network Services","authors":"Nickolas J. G. Falkner, P. Coddington, A. Wendelborn","doi":"10.1109/DEXA.2006.37","DOIUrl":"https://doi.org/10.1109/DEXA.2006.37","url":null,"abstract":"This paper presents an overview of a mechanism for bridging the gaps between the semantic Web data and services, and existing network-based services that are not semantically-annotated or do not meet the requirements of semantic Web-based applications. The semantic Web is a relatively new set of technologies that mutually interoperate well but often requires mediation, translation or wrapping to interoperate with existing network-based services. Seen as an extension of network-based services and the WWW, the semantic Web constitutes an expanding system that can require significant effort to integrate and develop services while still providing seamless service to users. New components in a system must interoperate with the existing components and their use of protocols and shared data must be structurally and semantically equivalent. The new system must continue to meet the original system requirements as well as providing the new features or facilities. We propose a new model of network services using a knowledge-based approach that defines services and their data in terms of an ontology that can be shared with other components","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"294 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121361865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Regularity Analysis Using Time Slot Counting in the Mobile Clickstream
T. Yamakami
The ever-changing nature of the mobile Internet contributes to the difficulties encountered in user behavior research. Regularity is an important aspect of the mobile Internet in research and marketing, because end users easily lose interest and leave mobile Web sites due to the limited visibility of the Web. Maintaining user loyalty is a vital challenge for mobile Webs; therefore, methods to identify loyalty transitions are important. The author proposes a regularity measure based on click counting in time slots. With the assumption that users with regular access are more likely to continue using the mobile Web, the author examines monthly prediction of user behavior based on user access regularity in the previous month. The author obtains approximately 80% prediction accuracy in the case study, and discusses the limitations and implications of the comparison.
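The regularity measure can be pictured as counting, per user, how many distinct time slots of the previous month contain at least one click, and then predicting continued use for users above a threshold. The sketch below follows that picture; the slot granularity (calendar days) and the threshold value are illustrative assumptions, not the paper's actual parameters.

```python
from collections import defaultdict

def regularity_by_time_slot(clicks):
    """Per-user regularity: number of distinct daily slots with at least
    one click.  `clicks` is an iterable of (user_id, timestamp) pairs,
    where `timestamp` is a datetime.  A user active on 20 different days
    scores 20; a bursty user with 200 clicks on a single day scores 1.
    """
    slots_hit = defaultdict(set)
    for user, ts in clicks:
        slots_hit[user].add(ts.date())   # one slot per calendar day (assumed granularity)
    return {user: len(days) for user, days in slots_hit.items()}

def predict_active_next_month(regularity, threshold=8):
    """Predict continued use for users whose previous-month regularity
    reaches the (assumed) threshold."""
    return {user for user, score in regularity.items() if score >= threshold}
```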
{"title":"Regularity Analysis Using Time Slot Counting in the Mobile Clickstream","authors":"T. Yamakami","doi":"10.1109/DEXA.2006.122","DOIUrl":"https://doi.org/10.1109/DEXA.2006.122","url":null,"abstract":"The ever-changing nature of the mobile Internet contributes to the difficulties encountered in user behavior research. Regularity is an important aspect of the mobile Internet in research and marketing, because it end users easily lose their interest and leave the mobile Web sites due to the limited visibility of the Web. Maintaining user loyalty is a vital challenge for mobile Webs. Therefore, the methods to identify loyalty transitions are important. The author proposes a regularity measure using click counting in the time slots. With the assumption that the users with regular access have more chances to continue to use the mobile Webs, the author examines the monthly prediction of user behavior based on the user access regularity in the previous month. The author obtains approximate 80% accuracy of prediction in the case study. The author discusses the limitation and implications of the comparison","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130303096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
Mobility Extensions for Knowledge Discovery Workflows in Data Mining Grids
K. Hummel, Georg Bohs, P. Brezany, I. Janciak
In scientific and other domains, knowledge discovery has started to be widely supported by service-oriented data mining grids. When access to such services is required anytime and anywhere, the integration of mobile devices and wireless networks into grids is useful. However, mobile technologies exhibit limited capabilities, and movement further causes frequent changes of context, such as location and, thus, network connectivity. In this paper, the integration of mobile devices as ubiquitous knowledge discovery clients is proposed. The major service classes in knowledge discovery workflow management are addressed, namely the monitoring and control of executing services. The feasibility of the approach is demonstrated by means of a .NET-based prototypical implementation on PDAs for the knowledge discovery framework GridMiner.
{"title":"Mobility Extensions for Knowledge Discovery Workflows in Data Mining Grids","authors":"K. Hummel, Georg Bohs, P. Brezany, I. Janciak","doi":"10.1109/DEXA.2006.97","DOIUrl":"https://doi.org/10.1109/DEXA.2006.97","url":null,"abstract":"In scientific and other domains, knowledge discovery has started to be widely supported by service oriented data mining grids. When access to such services is required anytime at anyplace, the integration of mobile devices and wireless networks into grids is useful. However, mobile technologies exhibit limited capabilities and movement further cause frequent changes of context, like location and, thus, network connectivity. In this paper, the integration of mobile devices as ubiquitous knowledge discovery clients is proposed. The major service classes in knowledge discovery workflow management are addressed, those are, the monitoring and controlling of executing services. The feasibility of the approach is demonstrated by means of a .NET-based prototypical implementation on PDAs for the knowledge discovery framework GridMiner","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129285665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8