Publishing information in a virtual organization (VO) has become easy because of low entry barriers, so novel mechanisms for assessing the quality of collected information have become a necessity. An evaluator makes such an assessment based on the trust he or she places in the information. This paper presents a model for evaluating information trustworthiness in a data-intensive VO. When a piece of information is derived from data items gathered from multiple sources (each data item is called an object, used together with the term subject), it is possible that no single data value (called a version of the object) satisfies an evaluator's quality requirement when the values are evaluated separately. According to the principle of object trust combination, if the final values of an object calculated by significantly different methods are similar, the evaluator places a higher level of trust in the results. Intuitively, different versions of the same object that are calculated in different ways but have similar values provide "multiple proofs" of their correctness. We assume that a subject holds no conflicting information on a given object. This paper uses a formal data structure to represent how a given piece of information (an object version) has been formed and develops algorithms (see Section 4) to compare the structural similarity and dissimilarity of two object versions, which is then used to calculate the final trust values of the object.
{"title":"Information trustworthiness evaluation based on trust combination","authors":"Yanjun Zuo, B. Panda","doi":"10.1145/1141277.1141721","DOIUrl":"https://doi.org/10.1145/1141277.1141721","url":null,"abstract":"Publishing information in a virtual organization (VO) has become too easy due to low barriers; hence development of novel mechanisms to assess the quality of collected information has become a necessity. An evaluator makes such an assessment based on the trust he/she places on the information. This paper presents a model for evaluating information trustworthiness in a data-intensive VO.When some information is derived from various data items gathered from multiple sources (each data item is called an object as used together with the term, subject), it is possible that no data value (called a version of the object) satisfies an evaluator's requirement with regard to information quality, if they are evaluated separately. According to the principle of object trust combination, if the final values of an object calculated by using significantly different methods are similar, then the evaluator places higher level of trust in the results. Intuitively, different versions of the same object that are calculated in different ways but have similar values provides \"multiple-proofs\" towards their correctness. We assume that a subject has no conflicting information on a given object.This paper uses a formal data structure to represent how a given piece of information (object version) has been formed and develops algorithms (see Section 4) to compare the component structure similarity/dissimilarity between two object versions. This helps in calculating the final trust values of the object.","PeriodicalId":269830,"journal":{"name":"Proceedings of the 2006 ACM symposium on Applied computing","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115165113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Because of its importance for the development and implementation of standard software applications in organizations, reference modeling has received intensive coverage from IS researchers as well as practitioners. Method engineering has received less attention, particularly in IS practice. There is evidence that the basic construction and application principles for reference models and for methods are similar. The goal of this paper is to analyze the resulting reuse potentials. As a conceptual basis, the state of the art of reference modeling and reference model application, as well as the state of the art of method engineering and method application, is presented. Reuse potentials are then systematically analyzed, and future research directions in this area are outlined.
{"title":"Reference modeling and method construction: a design science perspective","authors":"R. Winter, Joachim Schelp","doi":"10.1145/1141277.1141638","DOIUrl":"https://doi.org/10.1145/1141277.1141638","url":null,"abstract":"Due to its importance for the development and the implementation of standard software applications in organizations, reference modeling has gained intensive coverage by IS researchers as well as by practitioners. Method engineering is covered less, in particular by IS practice. There is evidence that the basic construction and application principles for reference models and methods are similar. The goal of this paper is to analyze reuse potentials. As a conceptual basis, the state-of-the-art of reference modeling and reference model application as well as the state-of-the-art of method engineering and method application are presented. Reuse potentials are systematically analyzed, and future research directions in this area are outlined.","PeriodicalId":269830,"journal":{"name":"Proceedings of the 2006 ACM symposium on Applied computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116909251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What can be said against a moral obligation to use IT for enhancement purposes? Some have argued, and it is quite conceivable that this is an increasingly common view, that we may have a moral obligation to use IT to enhance human bodies and human decision-making, for instance by using computers for moral decision-making in cases of high (moral) complexity such as euthanasia decisions. In this paper I formulate some objections to the suggestion that IT tools can and ought to be used for human enhancement in the sense of improving moral decision-making. If we were to use IT for enhancement purposes, what would the problems be? I discuss several, such as moral deskilling, epistemic dependence, the allocation of responsibility for IT support, and epistemic paternalism. The conclusion is that it is questionable whether we can speak of a moral obligation to use IT tools for human enhancement. IT is certainly extremely helpful in improving decision-making and, as some conceive it, the quality of life. However, speaking of a moral obligation seems too strong a claim, or at least one that should be reconsidered in light of the issues discussed here.
{"title":"Moral responsibility and IT for human enhancement.","authors":"Noëmi Manders-Huits","doi":"10.1145/1141277.1141340","DOIUrl":"https://doi.org/10.1145/1141277.1141340","url":null,"abstract":"What can be said against a moral obligation to use IT for enhancement purposes? Some have argued - and it is very well conceivable that this is an increasingly common conception - that we may have a moral obligation to use IT for enhancing human bodies and human decision-making, for instance by using computers for moral decision-making in cases in which we are dealing with a high level of (moral) complexity such as euthanasia decisions. In this paper I will formulate some objections against the suggestion made by some that IT tools can and ought to be used for human enhancement, in the sense of improving moral decision-making.If we were to use IT for enhancement purposes, what would be the problems? In this paper I will discuss some problems, such as moral deskilling, epistemic dependence, the allocation of responsibility for IT support, and epistemic paternalism. The conclusion is that it is questionable whether we can speak of a moral obligation to use IT tools for human enhancement. IT is certainly extremely helpful in improving decision-making and improving the quality of life as conceived by some. However, speaking of a moral obligation seems too strong of a claim or at least it should be reconsidered in light of the issues here discussed.","PeriodicalId":269830,"journal":{"name":"Proceedings of the 2006 ACM symposium on Applied computing","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117340410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
While the informal style used to describe design patterns has proven valuable, it is also imprecise. To ensure that patterns are applied correctly, we must also have precise pattern characterizations, and tools for determining whether the appropriate implementation requirements are satisfied. To address this problem, we first present a specification language that captures pattern requirements precisely, as well as the ways in which patterns are specialized for use. Second, we present a tool that generates a set of aspect-oriented monitors for a system based on the specifications of the patterns used in its design. The generated aspects are used to monitor the system at runtime to determine whether the appropriate implementation requirements are satisfied.
{"title":"Automated generation of monitors for pattern contracts","authors":"B. Tyler, J. Hallstrom, N. Soundarajan","doi":"10.1145/1141277.1141695","DOIUrl":"https://doi.org/10.1145/1141277.1141695","url":null,"abstract":"While the informal style used to describe design patterns has proven valuable, it is also imprecise. To ensure that patterns are applied correctly, we must also have precise pattern characterizations, and tools for determining whether the appropriate implementation requirements are satisfied. To address this problem, we first present a specification language that captures pattern requirements precisely, as well as the ways in which patterns are specialized for use. Second, we present a tool that generates a set of aspect-oriented monitors for a system based on the specifications of the patterns used in its design. The generated aspects are used to monitor the system at runtime to determine whether the appropriate implementation requirements are satisfied.","PeriodicalId":269830,"journal":{"name":"Proceedings of the 2006 ACM symposium on Applied computing","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121052742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Upcoming ubiquitous computing systems are required to operate in dynamic, diverse, unverified, and unpredictable operating environments. The OSGi (Open Service Gateway initiative) framework employs the service-oriented approach and the Java class-loader architecture for runtime service deployment, which are well suited to the dynamic environment envisioned for home networking and ubiquitous computing. However, the current OSGi framework does not provide full reliability measures, especially for failure conditions such as network, device, and application failures. This paper analyzes software reliability issues in the OSGi framework and proposes proxy-based reliability extensions. The design concept is implemented and partly tested on an open-source OSGi platform, Oscar, using a smart-home residential gateway test-bed.
{"title":"Towards reliable OSGi framework and applications","authors":"Heejune Ahn, H. Oh, C. Sung","doi":"10.1145/1141277.1141617","DOIUrl":"https://doi.org/10.1145/1141277.1141617","url":null,"abstract":"Upcoming ubiquitous computing systems are required to operate in dynamic, diverse, unverified, and unpredictable operating environment. The OSGi (Open Service Gateway initiative) framework employs the service-oriented approach and the java classloader architecture for the runtime service deployment, that are well suited to the dynamic environment envisioned for home networking and ubiquitous computing. However, the current OSGi framework does not provide full reliability measures, especially for failure conditions such as network, device, and application failures. This paper analyzes software reliability issues in OSGi framework and proposes a proxy-based reliable extensions. The design concept is implemented and partly tested on an open source OSGi platform, Oscar, for the smart home residential gateway test-bed.","PeriodicalId":269830,"journal":{"name":"Proceedings of the 2006 ACM symposium on Applied computing","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127468970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we propose to improve the effectiveness of biomedical information retrieval via a medical thesaurus. We analyzed the deficiencies of the existing medical thesauri and reconstructed a new thesaurus, called MEDTHES, which follows the ANSI/NISO Z39.19-2003 standard. MEDTHES also endows the users with fine-grained control of information retrieval by providing functions to calculate the semantic similarity between words. We demonstrate the usage of MEDTHES through an existing data search engine.
{"title":"Semantic-based information retrieval of biomedical data","authors":"Peng Yan, Y. Jiao, A. Hurson, T. Potok","doi":"10.1145/1141277.1141678","DOIUrl":"https://doi.org/10.1145/1141277.1141678","url":null,"abstract":"In this paper, we propose to improve the effectiveness of biomedical information retrieval via a medical thesaurus. We analyzed the deficiencies of the existing medical thesauri and reconstructed a new thesaurus, called MEDTHES, which follows the ANSI/NISO Z39.19-2003 standard. MEDTHES also endows the users with fine-grained control of information retrieval by providing functions to calculate the semantic similarity between words. We demonstrate the usage of MEDTHES through an existing data search engine.","PeriodicalId":269830,"journal":{"name":"Proceedings of the 2006 ACM symposium on Applied computing","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123319099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mobile ad-hoc networks (MANETs) are expected to form the basis of future mission-critical applications such as combat and rescue operations. In this context, the communication and computational tasks required by overlying applications will rely on the combined capabilities and resources provided by the underlying network nodes. This paper introduces an integrated FlexFeed/A-globe technology and a distributed algorithm for opportunistic resource allocation in resource- and policy-constrained mobile ad-hoc networks. The algorithm is based on agent negotiation for the bidding, contracting, and reservation of resources, relying primarily on the concept of remote presence. In the proposed algorithm, stand-in agent technology is used to create a virtual, distributed coordination component for opportunistic resource allocation in mobile ad-hoc networks.
{"title":"A distributed stand-in agent based algorithm for opportunistic resource allocation","authors":"P. Benda, P. Jisl, M. Pechoucek, Niranjan Suri, M. Carvalho","doi":"10.1145/1141277.1141303","DOIUrl":"https://doi.org/10.1145/1141277.1141303","url":null,"abstract":"Mobile ad-hoc networks (MANET) are expected to form the basis of future mission critical applications such as combat and rescue operations. In this context, communication and computational tasks required by overlying applications will rely on the combined capabilities and resources provided by the underlying network nodes. This paper introduces an integrated FlexFeed/A-globe technology and distributed algorithm for opportunistic resource allocation in resource-and policy-constrained mobile ad-hoc networks. The algorithm is based on agent negotiation for the bidding, contract and reservation of resources, relying primarily on the concept of remote presence. In the proposed algorithm, stand-in Agents technology is used to create a virtual, distributed co-ordination component for opportunistic resource allocation in mobile ad-hoc networks.","PeriodicalId":269830,"journal":{"name":"Proceedings of the 2006 ACM symposium on Applied computing","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123780183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many distributed real-time and embedded (DRE) applications require a scalable event-driven communication model that decouples suppliers from consumers and simultaneously supports advanced quality of service (QoS) properties. This article focuses on the design of such a service complying with the OMG Notification Service standard. It is optimized for the CAN bus, a widely used interconnect, where real-time characteristics are a requirement. A new protocol for the efficient distribution of events in a CAN-based distributed control system is presented, a protocol which is tailored to the CAN bus and produces very low overhead by utilizing CAN-specific features.
{"title":"Design and implementation of a real-time notification service within the context of embedded ORB and the CAN bus","authors":"T. Guesmi, H. Rezig","doi":"10.1145/1141277.1141454","DOIUrl":"https://doi.org/10.1145/1141277.1141454","url":null,"abstract":"Many distributed real-time and embedded (DRE) applications require a scalable event-driven communication model that decouples suppliers from consumers and simultaneously supports advanced quality of service (QoS) properties. This article focuses on the design of such a service complying with the OMG Notification Service standard. It is optimized for the CAN bus, a widely used interconnect, where real-time characteristics are a requirement. A new protocol for the efficient distribution of events in a CAN-based distributed control system is presented, a protocol which is tailored to the CAN bus and produces very low overhead by utilizing CAN-specific features.","PeriodicalId":269830,"journal":{"name":"Proceedings of the 2006 ACM symposium on Applied computing","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126728373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Grid computing has received considerable attention in the distributed computing community, and there has been a great deal of research in this area, especially on the design and development of Grid middleware. More recently, the service-oriented architecture based on Web Services has rapidly become a major issue. The service-oriented architecture provides modularized functionality to Grid applications, but this technology has limitations: Web Services basically works over the SOAP protocol, which is not suitable for massive scientific data. In this paper, we propose MAGE, the Modular and Adaptive Grid Environment, which uses a dynamically reconfigurable component architecture with well-defined interfaces. MAGE provides several levels of transparency for Grid application development and can dynamically reconfigure its architecture to adapt to heterogeneous Grid environments.
{"title":"Light-weight service-oriented grid application toolkit","authors":"Sungju Kwon, Jaeyoung Choi, Kumwon Cho","doi":"10.1145/1141277.1141622","DOIUrl":"https://doi.org/10.1145/1141277.1141622","url":null,"abstract":"Grid has been focused in a distributed computing community. There has been a lot of research in these areas, especially for design and development of Grid middleware. More recently, the service-oriented architecture based on Web Services rapidly became a major issue. The service-oriented architecture provides a modularized functionality to Grid applications. However, this new technology has some limitations. Web Services basically works with the SOAP protocol, but it is not suitable for massive scientific data. In this paper, we propose MAGE, Modular and Adaptive Grid Environment, which is uses dynamically reconfigurable component architecture with interfaces. MAGE provides several level of transparency to the Grid application development, and it can dynamically reconfigure its architecture to adapt to heterogeneous Grid environments.","PeriodicalId":269830,"journal":{"name":"Proceedings of the 2006 ACM symposium on Applied computing","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126832339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A two-phase strategy is widely adopted to solve the side-chain conformation prediction (SCCP) problem. Phase one is a fast reduction phase that removes large numbers of rotamers that cannot occur in the GMEC (global minimum energy conformation). Phase two (the optimization phase) uses heuristics or exhaustive search to find a good or optimal solution. Presently, DEE (Dead End Elimination) is the only deterministic reduction method for phase one. However, to achieve convergence in phase two using DEE, the strategy of forming super-residues is used; this quickly leads to a combinatorial explosion and becomes inefficient. In this paper, an improvement of the DEE process that forms super-residues efficiently is proposed for phase one. The method merges residues into pairs based on some merging criteria, and Simple Goldstein elimination is then applied until no more elimination is possible. A decoupling process then reforms the original residues without the removed rotamers and rotamer pairs. The process of merging and elimination is repeated until no further elimination is possible. Initial experiments have shown that the method, called Merge-Decoupling DEE, can fix up to 25% of the residues left unfixed by Simple Goldstein DEE.
{"title":"An extension of dead end elimination for protein side-chain conformation using merge-decoupling","authors":"K. F. Chong, H. Leong","doi":"10.1145/1141277.1141320","DOIUrl":"https://doi.org/10.1145/1141277.1141320","url":null,"abstract":"A two-phase strategy is widely adopted to solve the side-chain conformation prediction (SCCP) problem. Phase one is a fast reduction phase removing large numbers of rotamers not existing in the GMEC. Phase two (optimization phase) uses heuristics or exhaustive search to find a good/optimal solution. Presently, DEE (Dead End Elimination) is the only deterministic reduction method for phase one. However, to achieve convergence in phase two using DEE, the strategy of forming super-residues is used. This quickly leads to a combinatorial explosion, and becomes inefficient In this paper, an improvement of the DEE process by forming super-residues efficiently is proposed for phase one. The method basically merges residues into pairs based on some merging criteria. Simple Goldstein is then applied until no more elimination is possible. A decoupling process then reforms the original residues sans removed rotamers and rotamer pairs. The process of merging and elimination is repeated until no more elimination is possible. Initial experiments have shown the method, called Merge-Decoupling DEE, can fix up to 25% of the unfixed residues coming out of Simple Goldstein DEE.","PeriodicalId":269830,"journal":{"name":"Proceedings of the 2006 ACM symposium on Applied computing","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126865830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}