Source localization is a challenging problem for multisensor, multitarget detection, tracking and estimation in wireless distributed sensor networks. In this paper, a novel source localization method, based on power spectral analysis and decision fusion in wireless distributed sensor networks, is presented, together with an energy decay model for acoustic signals. The new method is computationally efficient and requires less bandwidth than current methods because localization decisions are made at individual nodes and fused at the manager node, which eliminates the need for sophisticated synchronization. The proposed method is simulated with different numbers of sources and sensor nodes, and the results confirm its improved performance under both ideal and noisy conditions.
{"title":"Passive source localization using power spectral analysis and decision fusion in wireless distributed sensor networks","authors":"M. Z. Rahman, G. Karmakar, L. Dooley, G. Karmakar","doi":"10.1109/ITCC.2005.225","DOIUrl":"https://doi.org/10.1109/ITCC.2005.225","url":null,"abstract":"Source localization is a challenging issue for multisensor multitarget detection, tracking and estimation problems in wireless distributed sensor networks. In this paper, a novel source localization method, called passive source localization using power spectral analysis and decision fusion in wireless distributed sensor networks is presented. This includes an energy decay model for acoustic signals. The new method is computationally efficient and requires less bandwidth compared with current methods by making localization decisions at individual nodes and performing decision fusion at the manager node. This eliminates the requirement of sophisticated synchronization. A simulation of the proposed method is performed using different numbers of sources and sensor nodes. Simulation results confirmed the improved performance of this method under ideal and noisy conditions.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"193 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133751648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brent Doeksen, A. Abraham, Johnson P. Thomas, M. Paprzycki
The main focus of this study is to compare the performance of different soft computing paradigms for predicting the direction of individual stocks. Three artificial intelligence techniques were used to predict the direction of both Microsoft and Intel stock prices over a period of thirteen years. We explore the performance of artificial neural networks trained using backpropagation and conjugate gradient algorithms, and of Mamdani and Takagi-Sugeno fuzzy inference systems learned using neural learning and genetic algorithms. Once all the models were built, the final part of the experiment was to determine how much profit could be made using these methods versus a simple buy-and-hold strategy.
{"title":"Real stock trading using soft computing models","authors":"Brent Doeksen, A. Abraham, Johnson P. Thomas, M. Paprzycki","doi":"10.1109/ITCC.2005.238","DOIUrl":"https://doi.org/10.1109/ITCC.2005.238","url":null,"abstract":"The main focus of this study is to compare different performances of soft computing paradigms for predicting the direction of individuals stocks. Three different artificial intelligence techniques were used to predict the direction of both Microsoft and Intel stock prices over a period of thirteen years. We explore the performance of artificial neural networks trained using backpropagation and conjugate gradient algorithm and a Mamdani and Takagi Sugeno fuzzy inference system learned using neural learning and genetic algorithm. Once all the different models were built the last part of the experiment was to determine how much profit can be made using these methods versus a simple buy and hold technique.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114331088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Verifiable secret sharing (VSS) schemes ensure that the players share a unique secret and that this secret is the one originally distributed by the dealer, provided the dealer was honest. However, such schemes do not ensure that the shared secret has any special characteristics (such as being a prime, a safe prime, or having a specific bit-length). In this paper, we introduce a secret sharing scheme that allows a set of players to have confidence that they are sharing a large secret prime. Next, we introduce another scheme that allows the players to have confidence that they are sharing a large secret safe prime. Finally, we give a subroutine that allows the players to ensure that the shared primes have the appropriate bit-length. Our aim is to add a fault-tolerance property to the recent all-honest RSA function sharing protocol of M. H. Ibrahim et al. (2004).
{"title":"Verifiable threshold sharing of a large secret safe-prime","authors":"M.H. Ibrahi","doi":"10.1109/ITCC.2005.290","DOIUrl":"https://doi.org/10.1109/ITCC.2005.290","url":null,"abstract":"Verifiable secret sharing schemes (VSS) are schemes for the purpose of ensuring that the players are sharing a unique secret and this secret is the secret originally distributed by the dealer if the dealer was honest. However, such schemes do not ensure that the shared secret has any special characteristics (such as being a prime, safe prime or being with a specific bit-length). In this paper, we introduce a secret sharing scheme to allow a set of players to have confidence that they are sharing a large secret prime. Next, we introduce another scheme that allows the players to have confidence that they are sharing a large secret safe prime. Finally we give a subroutine that allows the players to ensure that the shared primes are of the appropriate bit-length. What we have in mind is to add fault-tolerance property to the recent all honest RSA function sharing protocol as presented in M. H. Ibrahim et al. (2004).","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114854990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the advancement in wireless technologies in general and mobile devices capabilities in particular, ubiquitous access of mobile Web services continues to be in the focal point of research. This paper presents a novel architecture for discovery and invocation of mobile Web services through automatically generated abstract multimodal user interface for these services. A prototype has been developed to auto-generate user interface based on XForms and VoiceXml from a WDSL file. In this proposed architecture, the discovered Web services are invoked dynamically with a transparent mechanism. Moreover, the proposed architecture is a component-based architecture that provides its core functionality as Web services.
{"title":"Mobile Web services discovery and invocation through auto-generation of abstract multimodal interface","authors":"R. Steele, K. Khankan, T. Dillon","doi":"10.1109/ITCC.2005.202","DOIUrl":"https://doi.org/10.1109/ITCC.2005.202","url":null,"abstract":"With the advancement in wireless technologies in general and mobile devices capabilities in particular, ubiquitous access of mobile Web services continues to be in the focal point of research. This paper presents a novel architecture for discovery and invocation of mobile Web services through automatically generated abstract multimodal user interface for these services. A prototype has been developed to auto-generate user interface based on XForms and VoiceXml from a WDSL file. In this proposed architecture, the discovered Web services are invoked dynamically with a transparent mechanism. Moreover, the proposed architecture is a component-based architecture that provides its core functionality as Web services.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117325505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software models evolve at different levels of abstraction, from the requirements specification to development of the source code. The models underlying this process are related, and their elements are usually mutually dependent. To preserve consistency and enable synchronization when models are altered due to evolution, the underlying model dependencies need to be established and maintained. As there is a potentially large number of such relations, this process should be automated for suitable scenarios. This paper introduces a tractable approach to automating the identification and encoding of model dependencies that can be used for model synchronization. The approach first uses association rules to map types between models at different levels of abstraction. It then applies formal concept analysis (FCA) to attributes of the extracted models to identify clusters of model elements.
{"title":"Using formal concept analysis to establish model dependencies","authors":"Igor Ivkovic, K. Kontogiannis","doi":"10.1109/ITCC.2005.286","DOIUrl":"https://doi.org/10.1109/ITCC.2005.286","url":null,"abstract":"Software models evolve at different levels of abstraction, from the requirements specification to development of the source code. The models underlying this process are related and their elements are usually mutually dependent. To preserve consistency and enable synchronization when models are altered due to evolution, the underlying model dependencies need to be established and maintained. As there is a potentially large number of such relations, this process should be automated for suitable scenarios. This paper introduces a tractable approach to automating identification and encoding of model dependencies that can be used for model synchronization. The approach first uses association rules to map types between models and different levels of abstraction. It then makes use of formal concept analysis (FCA) on attributes of extracted models to identify clusters of model elements.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123330394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Location is one of the most important kinds of context used in pervasive computing environments. GPS is widely used to obtain location information, but it works mainly in outdoor environments. Applications call for precise, easy-to-build, and easy-to-use indoor location systems. This paper presents our work on an indoor location determination system for Microsoft-Windows-based platforms that uses a preexisting IEEE 802.11 wireless network. Location is determined from radio signal strength information collected from multiple base stations at different physical locations. Our experiments show that this approach achieves a high accuracy rate.
{"title":"Towards an indoor location system using RF signal strength in IEEE 802.11 networks","authors":"A. Harder, Lanlan Song, Yu Wang","doi":"10.1109/ITCC.2005.278","DOIUrl":"https://doi.org/10.1109/ITCC.2005.278","url":null,"abstract":"Location is one of the most important contexts used in pervasive computing environments. GPS systems are intensely used to detect the location information; they mainly work in outdoor environment. Applications call for precise, easy-to-build, and easy-to-use indoor location systems. This paper presents our work to implement an indoor location determination system for Microsoft-Windows-based platforms using a preexisting IEEE 802.11 wireless network. The location is determined from radio signal strength information collected from multiple base stations at different physical locations. Our experiments show a high accuracy rate of this approach.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"172 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123566297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, a scheme is proposed for parallel-pipelined implementation of the multialphabet arithmetic-coding algorithm used in lossless data compression. Using this scheme, it is possible to parallelize both the encoding and decoding operations used, respectively, in data compression and decompression. The compression performance of the proposed implementation for both order-0 and order-1 models has been evaluated and compared with existing sequential implementations, in terms of both compression ratio and execution time, using the Canterbury corpus benchmark files. The proposed scheme also facilitates hardware realisation of the respective modules and is hence suitable for integration into embedded microprocessor systems, an important application area for lossless data compression.
{"title":"A parallel scheme for implementing multialphabet arithmetic coding in high-speed programmable hardware","authors":"S. Mahapatra, Kuldeep Singh","doi":"10.1109/ITCC.2005.24","DOIUrl":"https://doi.org/10.1109/ITCC.2005.24","url":null,"abstract":"In this paper, a scheme is proposed for parallel-pipelined implementation of the multialphabet arithmetic-coding algorithm used in lossless data compression. Using this scheme, it is possible to parallelize both the encoding and decoding operations used respectively in data compression and decompression. The compression performance of the proposed implementation for both order 0 and order 1 models have been evaluated and compared with existing sequential implementations in terms of compression ratios as well as the execution time using the Canterbury corpus benchmark set of files. The proposed scheme also facilitates hardware realisation of the respective modules and hence is suitable for integration into embedded microprocessor systems, an important area where lossless data compression is applied.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"282 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122946150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anomaly detection based on monitoring of sequences of system calls has proved to be an effective approach for detecting previously unknown attacks on programs. This paper describes a new model for profiling normal program behavior that can be used to detect intrusions that change application execution flow. The model (hybrid pushdown automaton, HPDA) incorporates call stack information and can be learned by dynamic analysis of training data captured from the call stack log. The learning algorithm uses the call stack information maintained by the program to build a finite state automaton. Compared with other approaches, including VtPath, which also uses call stack information, the HPDA model produces a more compact and general representation of control flow, handles recursion naturally, can be learned with less training data, and has a lower false positive rate when used for anomaly detection. In addition, dynamic learning can be used to supplement a model acquired from static analysis.
{"title":"Dynamic learning of automata from the call stack log for anomaly detection","authors":"Z. Liu, S. Bridges","doi":"10.1109/ITCC.2005.136","DOIUrl":"https://doi.org/10.1109/ITCC.2005.136","url":null,"abstract":"Anomaly detection based on monitoring of sequences of system calls has proved to be an effective approach for detection of previously unknown attacks on programs. This paper describes a new model for profiling normal program behavior that can be used to detect intrusions that change application execution flow. The model (hybrid push down automaton, HPDA) incorporates call stack information and can be learned by dynamic analysis of training data captured from the call stack log. The learning algorithm uses call stack information maintained by the program to build a finite state automaton. When compared to other approaches including VtPath which also uses call stack information, the HPDA model produces a more compact and general representation of control flow, handles recursion naturally, can be learned with less training data, and has a lower false positive rate when used for anomaly detection. In addition, dynamic learning can also be used to supplement a model acquired from static analysis.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"05 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123520576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Declustering techniques reduce query response times through parallel I/O by distributing data among multiple devices. Most research on declustering targets spatial range queries and investigates schemes with low additive error. Recently, declustering using replication has been proposed to reduce this additive overhead; replication significantly reduces the retrieval cost of arbitrary queries. In this paper, we propose a disk allocation and retrieval mechanism for arbitrary queries based on design theory. Using the proposed c-copy replicated declustering scheme, (c - 1)k^2 + ck buckets can be retrieved using at most k disk accesses. The retrieval algorithm is very efficient and asymptotically optimal, with Θ(|Q|) complexity for a query Q. In addition to its deterministic worst-case bound and efficient retrieval, the proposed algorithm handles nonuniform data and high dimensions, supports incremental declustering, and has good fault-tolerance properties.
{"title":"Design theoretic approach to replicated declustering","authors":"A. Tosun","doi":"10.1109/ITCC.2005.124","DOIUrl":"https://doi.org/10.1109/ITCC.2005.124","url":null,"abstract":"Declustering techniques reduce query response times through parallel I/O by distributing data among multiple devices. Most of the research on declustering is targeted at spatial range queries and investigates schemes with low additive error. Recently, declustering using replication is proposed to reduce the additive overhead. Replication significantly reduces retrieval cost of arbitrary queries. In this paper, we propose a disk allocation and retrieval mechanism for arbitrary queries based on design theory. Using proposed c-copy replicated declustering scheme, (c - 1)k/sup 2/ + ck buckets can be retrieved using at most k disk accesses. Retrieval algorithm is very efficient and is asymptotically optimal with /spl Theta/(|Q|) complexity for a query Q. In addition to the deterministic worst-case bound and efficient retrieval, proposed algorithm handles nonuniform data, high dimensions, supports incremental declustering and has good fault-tolerance property.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121885323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Two main technologies stand out for implementing enterprise applications and Web services: Sun Microsystems' Java 2 Enterprise Edition (J2EE) and Microsoft's .NET Framework. The two are competing to become the platform of choice for enterprise application and Web service developers, and each provides specific development tools and APIs to assist them. The purpose of this research is to provide an unbiased comparison of the two platforms, based on the features and services they offer, from the viewpoint of developers building an enterprise or Web application from design through to deployment.
{"title":"Comparison of Web services technologies from a developer's perspective","authors":"S. Ahuja, R. Clark","doi":"10.1109/ITCC.2005.106","DOIUrl":"https://doi.org/10.1109/ITCC.2005.106","url":null,"abstract":"Two main technologies that stand out for the implementation of enterprise applications and Web services are Sun Microsystems' Java 2 Enterprise Edition (J2EE) and Microsoft's .NET framework. These two are competing to become the platform of choice for enterprise application and Web services developers. Each platform provides specific development tools and APIs to assist developers. The purpose of this research is to provide an unbiased comparison of the two platforms based on their features and services offered from the viewpoint of developers in the context of building an enterprise or Web application from design right through to deployment.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121948295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}