In the near future, people will be able to access information and resources in a public, open network environment. However, few current network environments seem to have a unified method for protecting information and resources. This paper proposes a unified resource protection scheme based on the fusion of two multi-agent systems, Kodama and VPC. Kodama has the power to construct flexible hierarchical logical spaces, and VPC has the ability to change its behavior dynamically according to the user's circumstances. As a protection scheme, two kinds of policies, a public policy and a private policy, are introduced into the fused multi-agent system. The combination of these policies makes it possible to realize an open yet secure information sharing system. We show practical sample applications using the scheme.
S. Amamiya and M. Amamiya, "Rights Sensitive Information Sharing Space Based on Multi-Agent System," in Proc. Third International Conference on Information Technology and Applications (ICITA'05), 2005. doi:10.1109/ICITA.2005.239
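The two-policy scheme above can be sketched as a conjunction of checks. This is a minimal, hypothetical illustration: the function names, user names, and rules are invented here and are not part of the Kodama/VPC implementation.

```python
def public_policy(user, action):
    """Space-wide rule: only registered members may read or write (example rule)."""
    return user in {"alice", "bob"} and action in {"read", "write"}

def private_policy(owner, user, action):
    """Owner-specific rule: anyone may read, but only the owner may write (example rule)."""
    return action == "read" or user == owner

def is_allowed(owner, user, action):
    # Access is granted only when the public and private policies both agree.
    return public_policy(user, action) and private_policy(owner, user, action)
```

The key design point is that neither policy alone decides access: the public policy keeps the space open to members, while the private policy lets each resource owner restrict writes.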
In this paper, we propose a multi-expert seal imprint verification system, designed specifically for Japanese bank check processing. A difficult problem in automatic seal imprint verification is that the system must achieve an extremely low error rate despite using only a small number of reference samples for training. To overcome this problem, the system combines two different seal imprint verification algorithms. A seal imprint is first extracted from the bank check image based on color features. The first verification algorithm uses local and global features of the seal imprint; the second uses a special correlation measure based on a global approach. The two algorithms are combined in the multi-expert system by a voting strategy. Experimental results show that combining the two algorithms significantly improves verification performance in terms of both the false-acceptance and false-rejection error rates.
K. Ueda and Ken'ichi Matsuo, "Automatic seal imprint verification system for bank check processing," in Proc. Third International Conference on Information Technology and Applications (ICITA'05), 2005. doi:10.1109/ICITA.2005.81
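A multi-expert voting combination of two verifiers can be sketched as below. The thresholds and the unanimous-vote rule are assumptions for illustration; the paper does not publish its exact decision rule.

```python
def feature_verifier(score, threshold=0.6):
    """Expert 1: accepts if the local/global feature similarity is high enough."""
    return score >= threshold

def correlation_verifier(score, threshold=0.5):
    """Expert 2: accepts if the global correlation measure is high enough."""
    return score >= threshold

def combined_verdict(feature_score, correlation_score):
    # Unanimous vote: accept the seal imprint only if both experts accept.
    # This conservative rule tends to lower the false-acceptance rate,
    # trading some false rejections in return.
    votes = [feature_verifier(feature_score),
             correlation_verifier(correlation_score)]
    return all(votes)
```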
In this paper, we propose an efficient approach to maintaining the consistency of broadcast data in mobile computing environments. At the server, a special read-only transaction, the broadcast transaction, is scheduled to read data entities from the database for broadcast. Traditional concurrency control algorithms are inadequate and inefficient here, as the broadcast transaction creates numerous data conflicts with other update transactions that concurrently access the same set of data entities. Existing algorithms for globally reading a database, such as the Shade Test and the Color Test, could serve as solutions, but we observe that they too have deficiencies. We therefore devise a new, efficient algorithm, the Look-Ahead Protocol (LAP), to address this problem. Simulation results show that our algorithm outperforms the existing algorithms.
K. Lam, C. Wong, and William Leung, "Using Look-Ahead Protocol for Mobile Data Broadcast," in Proc. Third International Conference on Information Technology and Applications (ICITA'05), 2005. doi:10.1109/ICITA.2005.299
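The abstract does not specify the LAP mechanism itself, so the sketch below illustrates only the underlying problem it addresses: a read-only broadcast transaction conflicts with an update transaction exactly when its read set intersects the update's write set. All names here are illustrative.

```python
def conflicts(broadcast_read_set, update_write_set):
    """A read-only broadcast transaction conflicts with an update
    transaction iff some entity it reads is also written by the update."""
    return bool(set(broadcast_read_set) & set(update_write_set))

def conflicting_updates(broadcast_read_set, updates):
    """Return the ids of all concurrent updates that conflict with the
    broadcast transaction (updates: mapping of id -> write set)."""
    return [uid for uid, wset in updates.items()
            if conflicts(broadcast_read_set, wset)]
```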
IPv6 site multihoming, discussed in the Multi6 working group, is one of the hottest topics among the many IPv6-related issues in the IETF. We have previously proposed a variant of end-to-end multihoming (E2E-MH) in which an outgoing packet from a site is routed to a site-exit router by source address dependent (SAD) routing, so that it exits through the transit provider that assigned the prefix of the packet's source address. In this paper, we first show that such SAD routing can be implemented at acceptable cost when it is applied only to the default route entries on each router. We then propose a hierarchical subdivision method for automatically assigning address prefixes to links within a site. By combining the SAD routing configured on each router with this hierarchical address assignment, all necessary router configuration can be performed automatically, without any pre-configuration of IP addresses or routing information.
K. Ohira, Youichi Koyama, K. Fujikawa, and Y. Okabe, "Automatic address assignment for IPv6 end-to-end multihoming sites," in Proc. Third International Conference on Information Technology and Applications (ICITA'05), 2005. doi:10.1109/ICITA.2005.79
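Source-address-dependent default routing can be sketched as a lookup from provider prefixes to site-exit routers. The prefixes and router names below are documentation-range placeholders, not values from the paper.

```python
import ipaddress

# Hypothetical SAD default-route table: each transit provider's assigned
# prefix maps to the site-exit router that leads back to that provider.
SAD_DEFAULTS = {
    ipaddress.ip_network("2001:db8:a::/48"): "exit-router-A",
    ipaddress.ip_network("2001:db8:b::/48"): "exit-router-B",
}

def select_exit(src):
    """Pick the site-exit router based on the packet's source address,
    so the packet leaves via the provider that owns that prefix."""
    addr = ipaddress.ip_address(src)
    for prefix, router in SAD_DEFAULTS.items():
        if addr in prefix:
            return router
    return None  # source address matches no provider prefix
```

The point of restricting SAD routing to default routes is visible here: only this small per-provider table is source-dependent; all intra-site routing stays destination-based.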
This paper describes an attempt to develop a fuzzy data aggregation technique for analyzing data collected during a groupware usability study. We show the formal parallelism between the decision making problem and that of ranking alternatives in a usability study. This equivalence allows a combination of decision-under-uncertainty and multi-criteria decision making (MCDM) techniques to be used to rank alternatives within a usability study. The effectiveness of this approach is illustrated with experimental data gathered during a usability study conducted by the Ambient Technology Group at Middlesex University.
S. Ramalingam and D. Iourinski, "Using fuzzy functions to aggregate usability study data: a novel approach," in Proc. Third International Conference on Information Technology and Applications (ICITA'05), 2005. doi:10.1109/ICITA.2005.296
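One standard fuzzy aggregation operator used in MCDM is the ordered weighted average (OWA); the paper's exact aggregation functions are not given in the abstract, so the sketch below uses OWA purely as a representative example.

```python
def owa(scores, weights):
    """Ordered weighted average: weights are applied to the scores
    after sorting them in descending order, not to fixed criteria."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ordered = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ordered))

def rank_alternatives(alternatives, weights):
    """Rank (name, criterion-scores) pairs by their OWA-aggregated score."""
    return sorted(alternatives,
                  key=lambda kv: owa(kv[1], weights),
                  reverse=True)
```

With equal weights OWA reduces to the plain mean, so an alternative with balanced scores can outrank one with a single high score.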
This paper presents the results of an investigation of a chaotic multi-user modulation method called chaotic phase shift keying. Only one chaotic sequence generator exists in each transmitter and receiver. We present the theory behind its operation, as well as the bit error rate derivation. Analytical and simulation results confirm that it performs better than the chaotic shift keying (CSK) system, which requires two chaotic sequence generators in each transmitter and receiver.
G. S. Sandhu and S. Berber, "Investigation on operations of a secure communication system based on the chaotic phase shift keying scheme," in Proc. Third International Conference on Information Technology and Applications (ICITA'05), 2005. doi:10.1109/ICITA.2005.164
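The single-generator idea can be illustrated with a toy baseband model: one logistic-map generator produces the spreading sequence, a bit flips the sign of its chip segment, and the receiver (regenerating the same sequence from the shared seed) recovers bits by correlation. This is a generic chaotic-spreading sketch, not the paper's exact scheme, and the seed and map parameter are arbitrary.

```python
def logistic_map(x0, n, r=3.99):
    """Generate n chaotic chips from the logistic map, mapped to [-1, 1]."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(2 * x - 1)
    return seq

def modulate(bits, chips_per_bit, x0=0.3):
    """Bit 1 transmits the chip segment as-is; bit 0 transmits its negation."""
    chips = logistic_map(x0, len(bits) * chips_per_bit)
    return [c if b else -c
            for i, b in enumerate(bits)
            for c in chips[i * chips_per_bit:(i + 1) * chips_per_bit]]

def demodulate(signal, chips_per_bit, x0=0.3):
    """Regenerate the same chips from the shared seed and correlate."""
    chips = logistic_map(x0, len(signal))
    bits = []
    for i in range(len(signal) // chips_per_bit):
        seg = slice(i * chips_per_bit, (i + 1) * chips_per_bit)
        corr = sum(s * c for s, c in zip(signal[seg], chips[seg]))
        bits.append(1 if corr > 0 else 0)
    return bits
```

Because transmitter and receiver derive the chips from one shared generator, no second reference sequence needs to be transmitted or stored.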
This paper presents an approach that uses image contour recognition for navigation in enterprise geographic information systems (GIS). The extraction of object mark images is based on morphological structural patterns, which are described by morphological structural points, contour properties, and other geometrical data in a binary image of an enterprise geographic information map. Several preprocessing methods are introduced: contour smoothing and following, linearization, and the extraction of structural-point patterns. When a focus point of a map is selected, a dynamic layout and navigation diagram is constructed from the extracted object mark images; it consists of three underlying views: a graph navigation view, an image layout view, and an enterprise information view. In the diagram, each object mark image is represented by a node, and the relation between two nodes is represented by an edge. When a node is selected, a new dynamic layout and navigation diagram is generated from the object mark images extracted around the selected node, preserving its relation to the previous diagram. In this way, dynamic layout adjustment and navigation for an enterprise GIS are achieved.
Wei Lai, Donggang Yu, J. Tanaka, and Cai Fei, "Using image contour recognition in GIS navigation," in Proc. Third International Conference on Information Technology and Applications (ICITA'05), 2005. doi:10.1109/ICITA.2005.297
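The node-and-edge navigation diagram described above can be sketched as a small graph class where selecting a node yields a new view built from its neighbours while keeping a link to the previous focus. The class and method names are illustrative only.

```python
from collections import defaultdict

class NavigationDiagram:
    """Toy model of the navigation graph: nodes are object mark images,
    edges are relations between them."""

    def __init__(self):
        self.edges = defaultdict(set)

    def add_relation(self, a, b):
        self.edges[a].add(b)
        self.edges[b].add(a)

    def focus(self, node, previous=None):
        """Nodes shown when `node` is selected: the node itself, its
        neighbours, and optionally the previous focus, which preserves
        the new diagram's relation to the old one."""
        view = {node} | self.edges[node]
        if previous is not None:
            view.add(previous)
        return view
```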
Intrusion detection systems (IDS) have become widely used tools for ensuring system and network security. Among many other challenges, contemporary IDS have to cope with increasingly high bandwidths, which sometimes force them to let some data pass without being checked for possible malicious activity. This paper presents a novel method to improve the performance of IDS based on multimedia traffic classification. In the proposed method, the IDS has additional knowledge about common multimedia file formats and uses this knowledge to perform a more detailed analysis of packets carrying that type of data. If the structure and selected contents of the data are compliant, the corresponding stream is tagged accordingly, and the IDS is spared from further work on that stream. Otherwise, an anomaly is detected and reported. Our experiments using Snort confirm that this additional specialized knowledge results in substantial computational savings, without significant overhead for processing non-multimedia data.
Oge Marques and Pierre Baillargeon, "A Multimedia Traffic Classification Scheme for Intrusion Detection Systems," in Proc. Third International Conference on Information Technology and Applications (ICITA'05), 2005. doi:10.1109/ICITA.2005.28
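The tag-or-flag logic can be sketched with file-format magic bytes: a payload that matches a known multimedia signature is tagged and exempted from further inspection, while one that claims to be multimedia but fails the check is flagged. The signature table and verdict strings are illustrative; the paper's actual format checks go deeper than leading bytes.

```python
# Leading-byte signatures of a few common multimedia formats.
MAGIC = {
    b"\xff\xd8\xff": "jpeg",
    b"\x89PNG\r\n\x1a\n": "png",
    b"ID3": "mp3",
}

def classify(payload):
    """Return the multimedia format name, or None if no signature matches."""
    for sig, fmt in MAGIC.items():
        if payload.startswith(sig):
            return fmt
    return None

def inspect(payload, claimed_multimedia):
    fmt = classify(payload)
    if claimed_multimedia and fmt is None:
        return "anomaly"              # claims multimedia, fails the check
    if fmt is not None:
        return "tagged:" + fmt        # compliant: skip further IDS work
    return "full-inspection"          # ordinary traffic: inspect as usual
```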
Reconstructing 3D models from 2D images is an important problem in visualization, particularly in medical and biological visualization. We present a new algorithm for 3D model reconstruction from 2D images. In our approach, a 3D atlas is deformed to fit the feature data set in the target images. Compared with existing 3D reconstruction techniques, our approach is more efficient and easier to control. The resulting model has good inherent connectivity and smoothness. The resolution of the final model is determined by the resolution of the reference model rather than by the dataset, so the model provides a highly detailed view from a limited number of images. The algorithm has been successfully applied to medical simulation.
Ying Zhu and S. Belkasim, "A 3D reconstruction algorithm based on 3D deformable atlas," in Proc. Third International Conference on Information Technology and Applications (ICITA'05), 2005. doi:10.1109/ICITA.2005.3
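The atlas-deformation idea can be illustrated in a deliberately simplified 2-D setting: each atlas vertex is iteratively moved a fraction of the way toward its nearest target feature point. This is a toy sketch of the general principle, not the paper's deformation model, and the step count and rate are arbitrary.

```python
def deform(atlas, targets, steps=50, rate=0.25):
    """Pull each atlas vertex toward its nearest target feature point.
    `atlas` and `targets` are lists of (x, y) tuples."""
    pts = [list(p) for p in atlas]
    for _ in range(steps):
        for p in pts:
            # Nearest target point under squared Euclidean distance.
            tx, ty = min(targets,
                         key=lambda t: (t[0] - p[0]) ** 2 + (t[1] - p[1]) ** 2)
            p[0] += rate * (tx - p[0])
            p[1] += rate * (ty - p[1])
    return [tuple(p) for p in pts]
```

Because only vertex positions change, the atlas keeps its original connectivity, which is the property the abstract highlights.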
Dynamically composing Web services provides an efficient mechanism for building large, complex systems, and substantial progress has already been made toward composing Web services. Unfortunately, existing approaches cannot model the whole life cycle of a composed Web service. To solve this problem, we designed a model named the Service-Cloud model, inspired by the way clouds form in nature. The Service-Cloud model can describe the whole life cycle of a composed Web service: discovery, composition, publication, and termination. Based on the Service-Cloud model, we also designed and implemented a prototype.
Yanping Chen, Zeng-zhi Li, Li Wang, and Huaizhou Yang, "Service-Cloud Model of Composed Web Services," in Proc. Third International Conference on Information Technology and Applications (ICITA'05), 2005. doi:10.1109/ICITA.2005.251
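The four-phase life cycle can be sketched as a simple state machine. The state names follow the abstract's phases, but the class and transition table are illustrative, not taken from the Service-Cloud prototype.

```python
# Linear life cycle of a composed service, per the abstract's four phases.
TRANSITIONS = {
    "discovered": "composed",
    "composed": "published",
    "published": "terminated",
}

class ComposedService:
    def __init__(self):
        self.state = "discovered"

    def advance(self):
        """Move to the next life-cycle phase; error once terminated."""
        if self.state not in TRANSITIONS:
            raise RuntimeError("life cycle already complete")
        self.state = TRANSITIONS[self.state]
        return self.state
```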