R. Hashemi, J. H. Early, M. Bahar, A. Tyler, John F. Young
The predictive system presented in this paper employs both SOM and Hopfield nets to determine whether a given chemical agent causes cancer in the liver. The SOM performs the clustering of the training set and delivers a signature for each cluster. The Hopfield net treats each signature as an exemplar and learns the exemplars. Each record of the test set is considered a corrupted signature. The Hopfield net tries to un-corrupt the test record using the learned exemplars and map it to one of the signatures, and consequently to the prediction value associated with that signature. Four pairs of training and test sets are used to test the system. To establish the validity of the new predictive system, its performance is compared with that of discriminant analysis and the rough sets methodology applied to the same datasets.
{"title":"A signature-based liver cancer predictive system","authors":"R. Hashemi, J. H. Early, M. Bahar, A. Tyler, John F. Young","doi":"10.1109/ITCC.2005.37","DOIUrl":"https://doi.org/10.1109/ITCC.2005.37","url":null,"abstract":"The predictive system presented in this paper employs both SOM and Hopfield nets to determine whether a given chemical agent causes cancer in the liver. The SOM net performs the clustering of the training set and delivers a signature for each cluster. Hopfield net treats each signature as an exemplar and learns the exemplars. Each record of the test set is considered a corrupted signature. The Hopfield net tries to un-corrupt the test record using learned exemplars and map it to one of the signatures and consequently to the prediction value associated with the signature. Four pairs of training and test sets are used to test the system. To establish the validity of the new predictive system, its performance is compared with the performance of the discriminant analysis and the rough sets methodology applied on the same datasets.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117194194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
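The recall step this abstract describes — a Hopfield net settling a corrupted signature onto a stored exemplar — can be sketched in a few lines of NumPy. The signatures and network size below are hypothetical, not taken from the paper:

```python
import numpy as np

# Hypothetical cluster signatures encoded as bipolar (+1/-1) exemplars.
exemplars = np.array([
    [ 1,  1,  1, -1, -1, -1],
    [-1, -1, -1,  1,  1,  1],
])

# Hebbian weight matrix: sum of outer products, zero diagonal.
W = sum(np.outer(e, e) for e in exemplars).astype(float)
np.fill_diagonal(W, 0)

def recall(x, steps=10):
    """Iteratively settle a corrupted pattern toward a stored exemplar."""
    x = x.copy()
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x

corrupted = np.array([1, -1, 1, -1, -1, -1])  # exemplar 0 with one flipped bit
restored = recall(corrupted)                  # settles back to exemplar 0
```

The restored pattern identifies the matching signature, and hence the cluster's prediction value.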
With the advancement of wireless technologies in general and mobile device capabilities in particular, ubiquitous access to mobile Web services continues to be a focal point of research. This paper presents a novel architecture for the discovery and invocation of mobile Web services through an automatically generated abstract multimodal user interface for these services. A prototype has been developed to auto-generate a user interface based on XForms and VoiceXML from a WSDL file. In the proposed architecture, the discovered Web services are invoked dynamically through a transparent mechanism. Moreover, the proposed architecture is component-based and provides its core functionality as Web services.
{"title":"Mobile Web services discovery and invocation through auto-generation of abstract multimodal interface","authors":"R. Steele, K. Khankan, T. Dillon","doi":"10.1109/ITCC.2005.202","DOIUrl":"https://doi.org/10.1109/ITCC.2005.202","url":null,"abstract":"With the advancement in wireless technologies in general and mobile devices capabilities in particular, ubiquitous access of mobile Web services continues to be in the focal point of research. This paper presents a novel architecture for discovery and invocation of mobile Web services through automatically generated abstract multimodal user interface for these services. A prototype has been developed to auto-generate user interface based on XForms and VoiceXml from a WDSL file. In this proposed architecture, the discovered Web services are invoked dynamically with a transparent mechanism. Moreover, the proposed architecture is a component-based architecture that provides its core functionality as Web services.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117325505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
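The core idea of generating form controls from a WSDL description can be illustrated with a toy sketch: parse the WSDL message parts and emit one XForms input per part. The WSDL fragment and the flat string output are simplifying assumptions; real generation would also cover types, operations, and the VoiceXML modality:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical WSDL fragment with one message part.
WSDL = """<definitions xmlns:w="http://schemas.xmlsoap.org/wsdl/">
  <w:message name="GetQuoteRequest">
    <w:part name="symbol" type="xsd:string"/>
  </w:message>
</definitions>"""

NS = {"w": "http://schemas.xmlsoap.org/wsdl/"}

def xforms_inputs(wsdl_text):
    """Emit one XForms <input> control per message part found in the WSDL."""
    root = ET.fromstring(wsdl_text)
    controls = []
    for msg in root.findall("w:message", NS):
        for part in msg.findall("w:part", NS):
            name = part.get("name")
            controls.append(
                '<xf:input ref="%s"><xf:label>%s</xf:label></xf:input>'
                % (name, name))
    return controls

controls = xforms_inputs(WSDL)
```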
This paper proposes a parallel implementation of the multiple sequence alignment algorithm known as ClustalW on distributed-memory parallel machines. The proposed algorithm divides a progressive alignment into subtasks and schedules them dynamically. A task tree is built according to the dependencies of the generated phylogenetic tree. The computation and communication costs of the tasks are estimated at run-time and updated periodically. With dynamic scheduling, tasks are allocated to processors according to the tasks' estimated computation and communication costs and the processors' workloads, in order to minimize the completion time. The experimental results show that the proposed parallel implementation achieves a considerable speedup over the sequential ClustalW.
{"title":"Parallel multiple sequence alignment with dynamic scheduling","authors":"Jiancong Luo, I. Ahmad, Munib Ahmed, R. Paul","doi":"10.1109/ITCC.2005.223","DOIUrl":"https://doi.org/10.1109/ITCC.2005.223","url":null,"abstract":"This paper proposes a parallel implementation of the multiple sequence alignment algorithm, known as ClustalW, on distributed memory parallel machines. The proposed algorithm divides a progressive alignment into subtasks and schedules them dynamically. A task tree is built according to the dependency of the generated phylogenetic tree. The computation and communication costs of the tasks are estimated at run-time and updated periodically. With dynamic scheduling, tasks are allocated to the processors considering the tasks' estimated computation and communication costs and the processors' workload in order to minimize the completion time. The experiment results show that the proposed parallel implementation achieves a considerable speedup over the sequential ClustalW.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121115773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
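The scheduling strategy the abstract outlines — dispatch ready tasks from the guide-tree dependency structure to the least-loaded processor — resembles classic greedy list scheduling. The task names, costs, and dependencies below are illustrative stand-ins, not the paper's actual cost model:

```python
import heapq

# Hypothetical alignment subtasks with estimated costs.
tasks = {"align_AB": 4.0, "align_CD": 3.0, "align_ABCD": 6.0, "align_E": 1.0}
# Guide-tree dependencies: a merge waits for its child alignments.
deps = {"align_ABCD": {"align_AB", "align_CD"}}

def schedule(tasks, deps, n_procs=2):
    """Greedy list scheduling: ready tasks go to the least-loaded processor."""
    loads = [(0.0, p) for p in range(n_procs)]   # (accumulated cost, proc id)
    heapq.heapify(loads)
    done, order = set(), []
    while len(done) < len(tasks):
        # a task is ready once all of its dependencies have completed
        ready = [t for t in tasks
                 if t not in done and deps.get(t, set()) <= done]
        t = max(ready, key=tasks.get)            # costliest ready task first
        load, p = heapq.heappop(loads)
        heapq.heappush(loads, (load + tasks[t], p))
        order.append((t, p))
        done.add(t)
    return order

order = schedule(tasks, deps)
```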
Results from existing clustering algorithms used for segmentation are highly sensitive to the chosen features, which limits their generalization. Shape is one important attribute of an object. The detection and separation of objects using fuzzy ring-shaped clustering (FKR) and elliptic ring-shaped clustering (FKE) already exist in the literature. Not all real objects, however, are ring-shaped or elliptical, so to address this issue, this paper introduces a new shape-based algorithm, called fuzzy image segmentation combining ring and elliptic shaped clustering algorithms (FCRE), which merges the initial segmentation results produced by FKR and FKE. The distribution of unclassified pixels is performed by connectedness and fuzzy c-means (FCM) using a combination of pixel intensity and normalized pixel location. Both qualitative and quantitative analyses of the results for a variety of images confirm the superiority of the proposed FCRE algorithm over both FKR and FKE.
{"title":"Fuzzy image segmentation combining ring and elliptic shaped clustering algorithms","authors":"Mohammed Ameer Ali, L. Dooley, G. Karmakar","doi":"10.1109/ITCC.2005.157","DOIUrl":"https://doi.org/10.1109/ITCC.2005.157","url":null,"abstract":"Results from any existing clustering algorithm that are used for segmentation are highly sensitive to features that limit their generalization. Shape is one important attribute of an object. The detection and separation of an object using fuzzy ring-shaped clustering (FKR) and elliptic ring-shaped clustering (FKE) already exists in the literature. Not all real objects however, are ring or elliptical in shape, so to address these issues, this paper introduces a new shape-based algorithm, called fuzzy image segmentation combining ring and elliptic shaped clustering algorithms (FCRE) by merging the initial segmented results produced by FKR and FKE. The distribution of unclassified pixels is performed by connectedness and fuzzy c-means (FCM) using a combination of pixel intensity and normalized pixel location. Both qualitative and quantitative analysis of the results for different varieties of images proves the superiority of the proposed FCRE algorithm compared with both FKR and FKE.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116903147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
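The FCM step used here for unclassified pixels is the standard fuzzy c-means iteration: alternate between membership and centroid updates. A minimal sketch, with hypothetical per-pixel features (normalized location plus intensity, as the abstract describes):

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    """Standard fuzzy c-means: alternate membership and centroid updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # distances of every point to every center (small epsilon avoids 0/0)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=0)
    return U, centers

# Hypothetical features: (normalized x, normalized y, intensity) per pixel.
X = np.array([[0.10, 0.10, 0.20], [0.12, 0.10, 0.25],
              [0.90, 0.90, 0.80], [0.88, 0.92, 0.85]])
U, centers = fcm(X)
labels = U.argmax(axis=0)    # hard assignment from fuzzy memberships
```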
Verifiable secret sharing schemes (VSS) are schemes for ensuring that the players are sharing a unique secret, and that this secret is the one originally distributed by the dealer if the dealer was honest. However, such schemes do not ensure that the shared secret has any special characteristics (such as being a prime, a safe prime, or of a specific bit-length). In this paper, we introduce a secret sharing scheme that allows a set of players to have confidence that they are sharing a large secret prime. Next, we introduce another scheme that allows the players to have confidence that they are sharing a large secret safe prime. Finally, we give a subroutine that allows the players to ensure that the shared primes are of the appropriate bit-length. The aim is to add a fault-tolerance property to the recent all-honest RSA function sharing protocol presented in M. H. Ibrahim et al. (2004).
{"title":"Verifiable threshold sharing of a large secret safe-prime","authors":"M.H. Ibrahi","doi":"10.1109/ITCC.2005.290","DOIUrl":"https://doi.org/10.1109/ITCC.2005.290","url":null,"abstract":"Verifiable secret sharing schemes (VSS) are schemes for the purpose of ensuring that the players are sharing a unique secret and this secret is the secret originally distributed by the dealer if the dealer was honest. However, such schemes do not ensure that the shared secret has any special characteristics (such as being a prime, safe prime or being with a specific bit-length). In this paper, we introduce a secret sharing scheme to allow a set of players to have confidence that they are sharing a large secret prime. Next, we introduce another scheme that allows the players to have confidence that they are sharing a large secret safe prime. Finally we give a subroutine that allows the players to ensure that the shared primes are of the appropriate bit-length. What we have in mind is to add fault-tolerance property to the recent all honest RSA function sharing protocol as presented in M. H. Ibrahim et al. (2004).","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114854990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
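The verifiability and primality checks the abstract adds are built on top of plain threshold secret sharing. As background, a minimal Shamir (t, n) sharing of a prime secret can be sketched as follows; the field modulus and the shared prime are arbitrary illustrative choices:

```python
import random

P = 2**127 - 1  # prime field modulus (an arbitrary Mersenne prime)

def share(secret, t, n):
    """Shamir (t, n) sharing: random degree t-1 polynomial through the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

secret_prime = 1000003  # a known prime to be shared among the players
shares = share(secret_prime, t=3, n=5)
```

The schemes in the paper additionally let the players verify, without revealing the secret, that the shared value really is a (safe) prime of the right bit-length; that machinery is not shown here.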
A centralized collaborative system between nodes and Base Stations (BSs) is developed, and a new mobility prediction scheme using Data Mining techniques is proposed so that service for handoff calls can be guaranteed. This new approach belongs to the Direct Group Mobility (DGM) prediction scheme and is based on the Tree Path Construction Algorithm (TPCON) and the Merge Tree Algorithm (MTA). Two Call Admission Control (CAC) algorithms are developed for each BS, according to a predictive or adaptive policy for the reservation operation, in order to minimize the call dropping probability. This study deals with system behavior only during exceptional congestion time periods (periodical events).
{"title":"A Tree Based Data Mining Prediction Scheme for Wireless Cellular Network","authors":"J. Tsiligaridis, R. Acharya","doi":"10.1109/ITCC.2005.50","DOIUrl":"https://doi.org/10.1109/ITCC.2005.50","url":null,"abstract":"A centralized collaborative system between nodes and Base Stations (BSs) is developed, and a new prediction mobility scheme with Data Mining techniques is proposed so that the service of the handoff calls can be guaranteed. This new approach belongs to the Direct Group Mobility (DGM) prediction scheme and is based on the Tree Path Construction Algorithm (TPCON) and the Merge Tree Algorithm (MTA). Two Call Admission Control (CAC) algorithms are developed for each BS according to the predictive or adaptive policy for the reservation operation in order to minimize the call dropping probability. This study deals with the system behavior only at exceptional congestion time periods (periodical events).","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121214539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
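The general idea of mining movement histories to predict the next cell (and so reserve bandwidth for handoff calls) can be sketched with simple transition counting. The cell IDs and traces below are invented for illustration; the paper's TPCON/MTA tree algorithms are considerably more elaborate:

```python
from collections import Counter, defaultdict

# Hypothetical movement traces: sequences of cell IDs observed by the BSs.
traces = [["A", "B", "C"], ["A", "B", "D"], ["A", "B", "C"], ["E", "B", "C"]]

def build_model(traces):
    """Count cell-to-cell transitions mined from the movement histories."""
    model = defaultdict(Counter)
    for path in traces:
        for cur, nxt in zip(path, path[1:]):
            model[cur][nxt] += 1
    return model

def predict_next(model, cell):
    """Predict the most frequent successor cell, e.g. to reserve bandwidth."""
    return model[cell].most_common(1)[0][0] if model[cell] else None

model = build_model(traces)
```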
Bezier curves are a robust tool for a wide array of applications ranging from computer-aided design to calligraphic character outlining and object shape description. In terms of the control point generation process, existing shape descriptor techniques that employ Bezier curves do not distinguish between regions where an object's shape changes rapidly and those where the change is more gradual or flat. This can lead to an erroneous shape description, particularly where there are significantly sharp changes in shape, such as at sharp corners. This paper presents a novel shape description algorithm, called a generic shape descriptor using Bezier curves (SDBC), which defines a new strategy for Bezier control point generation by integrating domain-specific information about the shape of an object in a particular region. The strategy also includes an improved dynamic fixed-length coding scheme for control points. The SDBC framework has been rigorously tested on a number of arbitrary shapes, and both quantitative and qualitative analyses have confirmed its superior performance in comparison with existing algorithms.
{"title":"A generic shape descriptor using Bezier curves","authors":"Ferdous Sohel, G. Karmakar, L. Dooley","doi":"10.1109/ITCC.2005.11","DOIUrl":"https://doi.org/10.1109/ITCC.2005.11","url":null,"abstract":"Bezier curves are robust tool for a wide array of applications ranging from computer-aided design to calligraphic character, outlining and object shape description. In terms of the control point generation process, existing shape descriptor techniques that employ Bezier curves do not distinguish between regions where an object's shape changes rapidly and those where the change is more gradual or flat. This can lead to an erroneous shape description, particularly where there are significantly sharp changes in shape, such as at sharp corners. This paper presents a novel shape description algorithm called a generic shape descriptor using Bezier curves (SDBC), which defines a new strategy for Bezier control point generation by integrating domain specific information about the shape of an object in a particular region. The strategy also includes an improved dynamic fixed length coding scheme for control points. The SDBC framework has been rigorously tested upon a number of arbitrary shapes, and both quantitative and qualitative analyses have confirmed its superior performance in comparison with existing algorithms.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121321019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
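Once control points are chosen, the described curve is evaluated like any Bezier curve; the standard de Casteljau recursion makes the role of the control points concrete. The control points below are a made-up cubic segment, not output of SDBC:

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        # replace each adjacent pair by its interpolation at t
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical control points for one cubic shape segment.
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
mid = de_casteljau(ctrl, 0.5)   # point on the curve midway in parameter
```

For a cubic, the point at t = 0.5 equals (P0 + 3P1 + 3P2 + P3) / 8, so `mid` here is (2.0, 1.5).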
Two main technologies stand out for the implementation of enterprise applications and Web services: Sun Microsystems' Java 2 Enterprise Edition (J2EE) and Microsoft's .NET framework. These two are competing to become the platform of choice for enterprise application and Web services developers. Each platform provides specific development tools and APIs to assist developers. The purpose of this research is to provide an unbiased comparison of the two platforms, based on the features and services they offer, from the viewpoint of developers building an enterprise or Web application from design through to deployment.
{"title":"Comparison of Web services technologies from a developer's perspective","authors":"S. Ahuja, R. Clark","doi":"10.1109/ITCC.2005.106","DOIUrl":"https://doi.org/10.1109/ITCC.2005.106","url":null,"abstract":"Two main technologies that stand out for the implementation of enterprise applications and Web services are Sun Microsystems' Java 2 Enterprise Edition (J2EE) and Microsoft's .NET framework. These two are competing to become the platform of choice for enterprise application and Web services developers. Each platform provides specific development tools and APIs to assist developers. The purpose of this research is to provide an unbiased comparison of the two platforms based on their features and services offered from the viewpoint of developers in the context of building an enterprise or Web application from design right through to deployment.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121948295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a hierarchical QoS multicast routing protocol (HQMRP) for mobile ad-hoc networks. It can provide QoS-sensitive routes in a scalable and flexible way in a mobile network environment. In the proposed HQMRP scheme, each local node only needs to maintain local multicast routing information and/or summary information about other clusters (or domains), and does not require any global ad-hoc network state to be maintained. HQMRP also allows an ad-hoc group member to join or leave the multicast group dynamically, and it supports multiple QoS constraints. The paper presents a proof of correctness and a complexity analysis of the protocol. The performance of HQMRP is evaluated using simulation. The studies show that HQMRP provides a viable approach to QoS multicast routing for mobile ad-hoc networks.
{"title":"A QoS multicast routing protocol for mobile ad-hoc networks","authors":"L. Layuan, L. Chunlin","doi":"10.1109/ITCC.2005.28","DOIUrl":"https://doi.org/10.1109/ITCC.2005.28","url":null,"abstract":"This paper presents a hierarchical QoS multicast routing protocol (HQMRP) for mobile ad-hoc networks. It can provide QoS-sensitive routes in a scalable and flexible way, in the network environment with mobility. In the proposed HQMRP scheme, each local node just only needs to maintain local multicast routing information and/or summary information of other clusters (or domains), but does not requires any global ad hoc network states to be maintained. The HQMRP also allows that an ad-hoc group member can join/leave the multicast group dynamically, and supports multiple QoS constraints. The paper presents the proof of correctness and complexity analysis of the protocol. The performance measures of HQMRP are evaluated using simulation. The studies show that HQMRP can provide an available approach to QoS multicast routing for mobile ad-hoc networks.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122054783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
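The essence of QoS-constrained route selection — optimize one metric while pruning links that violate another — can be sketched with a bandwidth-constrained Dijkstra search. This toy graph and the single-path (rather than multicast-tree, hierarchical) setting are simplifications of what HQMRP actually does:

```python
import heapq

# Hypothetical links: neighbor -> (delay_ms, bandwidth_mbps) per directed edge.
graph = {
    "A": {"B": (10, 5), "C": (5, 1)},
    "B": {"D": (10, 5)},
    "C": {"D": (5, 10)},
    "D": {},
}

def qos_route(graph, src, dst, min_bw):
    """Dijkstra on delay, pruning links below the bandwidth constraint."""
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        delay, node, path = heapq.heappop(heap)
        if node == dst:
            return delay, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, (d, bw) in graph[node].items():
            if bw >= min_bw and nxt not in seen:
                heapq.heappush(heap, (delay + d, nxt, path + [nxt]))
    return None  # no route satisfies the constraint
```

With a 5 Mbps requirement the low-bandwidth shortcut through C is pruned and the route goes via B; relaxing the requirement recovers the lower-delay path.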
Software models evolve at different levels of abstraction, from the requirements specification to development of the source code. The models underlying this process are related and their elements are usually mutually dependent. To preserve consistency and enable synchronization when models are altered due to evolution, the underlying model dependencies need to be established and maintained. As there is a potentially large number of such relations, this process should be automated for suitable scenarios. This paper introduces a tractable approach to automating identification and encoding of model dependencies that can be used for model synchronization. The approach first uses association rules to map types between models and different levels of abstraction. It then makes use of formal concept analysis (FCA) on attributes of extracted models to identify clusters of model elements.
{"title":"Using formal concept analysis to establish model dependencies","authors":"Igor Ivkovic, K. Kontogiannis","doi":"10.1109/ITCC.2005.286","DOIUrl":"https://doi.org/10.1109/ITCC.2005.286","url":null,"abstract":"Software models evolve at different levels of abstraction, from the requirements specification to development of the source code. The models underlying this process are related and their elements are usually mutually dependent. To preserve consistency and enable synchronization when models are altered due to evolution, the underlying model dependencies need to be established and maintained. As there is a potentially large number of such relations, this process should be automated for suitable scenarios. This paper introduces a tractable approach to automating identification and encoding of model dependencies that can be used for model synchronization. The approach first uses association rules to map types between models and different levels of abstraction. It then makes use of formal concept analysis (FCA) on attributes of extracted models to identify clusters of model elements.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123330394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
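The FCA step the abstract describes groups model elements by shared attributes into formal concepts: pairs (extent, intent) where the extent is exactly the set of elements having every attribute in the intent. A brute-force sketch over a tiny hypothetical context (element and attribute names are invented; real FCA tools use far more efficient algorithms than this exponential enumeration):

```python
from itertools import combinations

# Hypothetical context: model elements and the attributes they carry.
context = {
    "Order":    {"persistent", "has_id"},
    "Customer": {"persistent", "has_id", "has_name"},
    "OrderDTO": {"serializable", "has_id"},
}

def concepts(context):
    """Enumerate formal concepts: (extent, intent) pairs closed both ways."""
    objects = list(context)
    found = set()
    for r in range(len(objects) + 1):
        for extent in combinations(objects, r):
            # intent: attributes common to every object in the candidate extent
            intent = (set.intersection(*(context[o] for o in extent))
                      if extent else set.union(*context.values()))
            # closure: all objects that carry every attribute of the intent
            closed = frozenset(o for o in objects if intent <= context[o])
            found.add((closed, frozenset(intent)))
    return found

cs = concepts(context)
```

For this context the concept ({Order, Customer}, {persistent, has_id}) emerges as one cluster of mutually dependent elements; {Order} alone is not an extent because its attribute set also covers Customer.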