Knowledge management will become an important and effective strategic instrument for improving organizational competitiveness as the age of the knowledge economy arrives. Evaluating knowledge management performance is an important part of the knowledge management process and an important way to gauge an enterprise's level of knowledge management. Building on current domestic and international studies and the related literature, this paper puts forward an indicator system for evaluating knowledge management performance, establishes a performance evaluation model that combines the Grey Evaluation Method with the AHP method (Grey-AHP), and presents a worked example. The example demonstrates that the Grey-AHP method performs well in evaluation.
{"title":"The Evaluation Study of Knowledge Management Performance Based on Grey -AHP Method","authors":"F. Jin, Peide Liu, Xin Zhang","doi":"10.1109/SNPD.2007.143","DOIUrl":"https://doi.org/10.1109/SNPD.2007.143","url":null,"abstract":"The Knowledge management will become important and effective strategic instrument to improve the organizational competitive power with the coming of knowledge-economy's age, the evaluation of knowledge management performance is an important part of knowledge management process and an important way to know the level of knowledge management of enterprise. This paper brings forward the indicator system of knowledge management's performance evaluation on the basis of current study of home and oversea and the related reference, and establishes the performance evaluation model based on combination of Grey Evaluation Method and AHP method(Grey-AHP), and also do some example research. The examples demonstrated that: Grey-AHP method can do well in evaluation.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115597968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computational morphology is a core component in many kinds of natural language processing, such as alignment techniques. This paper describes a method for morphological processing. Based on both rules and statistical models, a lemmatizer is constructed that analyzes English inflectional morphology and automatically derives the lemmas of words. The rule model incorporates data from various corpora, machine-readable dictionaries, and an empirical set of transformation rules, while the statistical model applies the maximum entropy principle to handle unknown words and ambiguous cases effectively. The knowledge used in the lemmatizer is easy to update, which supports the development of natural language processing applications. Experiments show that the lemmatizer has wide coverage and high accuracy.
{"title":"A Hybrid Model for Computational Morphology Application","authors":"Xu Yang, Wang Hou-feng","doi":"10.1109/SNPD.2007.31","DOIUrl":"https://doi.org/10.1109/SNPD.2007.31","url":null,"abstract":"Computational morphology is a core component in many different types of natural language processing, such as the alignment techniques. This paper describes a method for morphological processing. Based on both rules and statistical models, a lemmatizer is constructed to analyze the English inflectional morphology, and automatically derives the lemmas of the words. The rule model incorporates data from various corpora, machine-readable dictionaries, and an empirical metamorphose rule set, and the statistical model applies mainly the maximum entropy principles to deal with unknown words and ambiguous cases effectively. The knowledge used in our lemmatizer is convenient to update to support the development of natural language processing. Experiments show that the lemmatizer has a wide coverage and high accuracy.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116643637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper analyses the theory of relevance in information retrieval and proposes a content-based industry information retrieval system built on an ontology. The system can, to some degree, map retrieval results to pragmatic relevance and make querying more convenient for users, embodying a user-oriented principle. The retrieval system uses a novel retrieval model called the sorted duality inter-relevant successive tree to index the contents of yellow-page data and improve retrieval efficiency. Experiments show that the system is successful and achieves better retrieval precision and recall.
{"title":"A Yellow Page Information Retrieval System Based on Sorted Duality Interrelevant Successive Tree and Industry Ontology","authors":"Chuanyao Yang, Yuqin Li, Zhenghua Wang, Chenghong Zhang, Yunfa Hu","doi":"10.1109/SNPD.2007.480","DOIUrl":"https://doi.org/10.1109/SNPD.2007.480","url":null,"abstract":"This paper analyses the theory of information retrieval relativity and proposes establishing a content-based industry information retrieval system based on ontology. The system can accomplish the mapping between retrieval results and pragmatics relativity in some degree and bring more convenience to users when querying, which embodies the user- oriented principle. This retrieval system uses a novel retrieval model called sorted duality inter-relevant successive tree to index the contents of yellow page data to improve the retrieval efficiency. The experiment shows that this system is successful and has a better precision and recall of retrieval.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116944220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the modern battlefield, battle damage assessment (BDA) is one of the most important bases for combat decisions. With the increasing intensity of war, more and more BDA data are created, and how to effectively store, manage, and make good use of these data becomes an important matter. A data warehouse (DW) is a decision-support tool developed in recent decades, with powerful capabilities for data management and decision support. This paper discusses the building of a BDA data warehouse, the management of BDA data, and BDA decision support based on the DW.
{"title":"BDA information management and decision support based on DW","authors":"Ma Zhi-jun, Chen Li, Zhang yi-zhen","doi":"10.1109/SNPD.2007.196","DOIUrl":"https://doi.org/10.1109/SNPD.2007.196","url":null,"abstract":"In modern battle field, battle damage assessment (BDA) is one of the most important bases for battle decision. With increasing intensity of war, more and more BDA data created, how to effectively store, manage and make a good use of these BDA data becomes an important matter. Data warehouse (DW) is a decision support tool developed in recent decades, having powerful capability of data management and decision support. Paper discussed building of BDA data warehouse, management of BDA data and BDA decision based on DW.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116955482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we propose a new structure-based approach, called Xregion, for storing XML data in relational databases. The approach first partitions an XML document into several disjoint regions according to the cardinality of element nodes, and then maps these regions into separate relations. Our experimental results demonstrate that the proposed approach dramatically improves the performance of queries on XML data over existing approaches.
{"title":"Xregion: A structure-based approach to Storing XML Data in Relational Databases","authors":"Li-Yan Yuan, Meng Xue","doi":"10.1109/SNPD.2007.534","DOIUrl":"https://doi.org/10.1109/SNPD.2007.534","url":null,"abstract":"In this paper, we propose a new structure-based approach, called Xregion, to store XML data in relational databases. Our approach first partitions an XML document into several disjoint regions according to the cardinality of element nodes, and then maps these regions into separate relations. Our experimental results demonstrate that the proposed approach dramatically improves the performance of queries on the XML data over the existing approaches.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116987858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper introduces the concept of the cumulated anomaly, which describes a new type of database anomaly, and proposes a detection model for it, the dubiety-determining model (DDM). The model is based on statistical theory and fuzzy set theory, and can quantitatively measure the dubiety degree of each database transaction. We designed a software architecture that applies the DDM to monitoring database transactions, then implemented and tested the system. Our experimental results show that the DDM method is feasible and effective.
{"title":"Detecting Cumulated Anomaly by a Dubiety Degree based detection Model","authors":"Gang Lu, Junkai Yi, K. Lu","doi":"10.1109/SNPD.2007.237","DOIUrl":"https://doi.org/10.1109/SNPD.2007.237","url":null,"abstract":"The concept of cumulated anomaly is addressed in this paper, which describes a new type of database anomalies. A detection model, dubiety-determining model (DDM), for cumulated anomaly, is proposed. This model is based on statistical theories and fuzzy set theories. The DDM can measure the dubiety degree of each database transaction quantitatively. We designed software system architecture to support the DDM for monitoring database transactions. We also implemented the system and tested it. Our experimental results show that the DDM method is feasible and effective.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"107 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120874700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliably attributing messages to their originator and verifying the integrity of messages are important issues in classical cryptography as well as in quantum cryptography. To perform such tasks, a quantum signature scheme is proposed that exploits a quantum one-way function. The quantum one-way function employed is generated using quantum fingerprinting and a stabilizer quantum code.
{"title":"Quantum signature based on quantum one-way map and the stabilized quantum code","authors":"Ying Guo, Wenshan Cui, Yutao Zhang","doi":"10.1109/SNPD.2007.440","DOIUrl":"https://doi.org/10.1109/SNPD.2007.440","url":null,"abstract":"The reliable assignment of the messages to its originator and the integrality verification of the messages is an important issue in classic cryptography as well as quantum cryptography. To perform such kind of tasks, a quantum signature scheme is proposed by exploiting quantum one-way function. The employed quantum one-way function is generated by using quantum fingerprint and stabilizer quantum code.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127176085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many automatic test data generation approaches use constraint solvers to find data values. One problem with this method is that it cannot generate test data when the constraints are not solvable, either because no solution exists or because the constraints are too complex. We propose a constraint prioritization method that uses data sampling scores to generate valid test data even when a set of constraints is not solvable. Our case study illustrates the effectiveness of this method.
{"title":"Prioritized Constraints with Data Sampling Scores for Automatic Test Data Generation","authors":"Xiao Ma, J. J. Li, D. Weiss","doi":"10.1109/SNPD.2007.523","DOIUrl":"https://doi.org/10.1109/SNPD.2007.523","url":null,"abstract":"Many automatic test data generation approaches use constraint solvers to find data values. One problem with this method is that it cannot generate test data when the constraints are not solvable, either because there is no solution or the constraints are too complex. We propose a constraint prioritization method using data sampling scores to generate valid test data even when a set of constraints is not solvable. Our case study illustrates the effectiveness of this method.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127502058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
It is very difficult to construct correct component behavior models for distributed, reactive systems. The concept of a service invites us to revisit how component behavior models are constructed. However, current descriptions of services are mainly syntactic; because services lack a systematic semantic description, it is hard to develop services with high quality and reliability. In this paper we combine the ideas of service and component, and propose a new methodology supporting the automatic synthesis and verification of component behavior models. The method uses traditional scenario-based synthesis to build the behavior model of each role in a service and verifies the desired properties of each role by model checking; it then synthesizes the component behavior model by composing all roles participating in the different services, and verifies the liveness properties of the component by compositional reasoning, thereby avoiding a search of the component's whole state space.
{"title":"A service-oriented methodology supporting automatic synthesis and verification of component behavior model","authors":"Pengcheng Zhang, Yu Zhou, Bixin Li","doi":"10.1109/SNPD.2007.124","DOIUrl":"https://doi.org/10.1109/SNPD.2007.124","url":null,"abstract":"It is very difficult to construct correct component behavior models for distributed, reactive system. The concept of service makes us revisit the method of how to construct component behavior models. On the contrary, current descriptions of service are mainly based on syntax. Because service lacks of a systematic semantic description, it is hard to develop service with high quality and reliability. In this paper we combine the ideas of service and component; propose a new methodology supporting automatic synthesis and verification of component behavior model. This method uses traditional scenario-based synthesis technique to synthesize each role behavior model of service, and verifies desired properties of role by model checking, then synthesizes component behavior model by composing all roles participated in different services, and verifies liveness properties of component by compositional reasoning thus avoids searching the whole state space of component.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125866245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From the perspective of practical application, this article elaborates on the business rules management module developed for a telecom operator's settlement and apportion system, analyses potential problems that may arise in such a module, provides a design approach and solution for business rules management, and validates the effectiveness and correctness of the business rules at all levels. The system has now been put into production and has proved effective.
{"title":"The Management and Validation of Business Rules in Telecom Operator's Settlement and Apportion System","authors":"Biying Lin, Yanhui Zhang","doi":"10.1109/SNPD.2007.546","DOIUrl":"https://doi.org/10.1109/SNPD.2007.546","url":null,"abstract":"From the perspective of practical application, this article elaborates on the business rules management module developed in the telecom operator's settlement and apportion system, analyses potential problems that may arise in the business rules management module, provides a design approach and solution to the business rules management, and validates the effectiveness and correctness of the business rules at all levels. At present, the system has been put in production and proved to be effective.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123444045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}