Cooperative fuzzy rulebase construction based on a novel fuzzy decision tree
Pub Date: 2009-12-15 | DOI: 10.1109/IIT.2009.5413762
E. Ahmadi, M. Taheri, N. Mirshekari, S. Hashemi, A. Sami, Ali K. Hamze
Fuzzy Inference Systems (FIS) have attracted considerable attention due to their interpretability and ability to handle uncertainty. Hence, Fuzzy Rule-Based Classifier Systems (FRBCS) are widely investigated with respect to both rule-base construction and parameter learning. Decision trees, in turn, are recursive structures that are not only simple and accurate but also fast at classification, since they partition the feature space in a multi-stage process. Combining fuzzy reasoning with decision trees gathers the capabilities of both in a single integrated system. In this paper, a novel fuzzy decision tree (FDT) is proposed for extracting fuzzy rules that are both accurate and cooperative, owing to the dependency structure of the decision tree. Furthermore, a weighting method is used to emphasize the cooperation of the rules. Finally, the proposed method is compared with a well-known rule construction method named SRC on 8 UCI datasets. Experiments show a significant improvement in the classification performance of the proposed method in comparison with SRC.
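To make the rule-base side of this concrete, the following minimal Python sketch shows how a set of weighted fuzzy rules can classify a sample: each rule's firing strength is the product of its antecedent membership degrees multiplied by the rule weight, and the class of the strongest rule wins. The triangular membership functions, the toy rules, and the weights are illustrative assumptions, not the rule base produced by the proposed FDT.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Each rule: antecedent = {feature index: (a, b, c)}, a class label, and a weight.
rules = [
    {"antecedent": {0: (0.0, 0.2, 0.5), 1: (0.3, 0.6, 0.9)}, "label": 0, "weight": 0.8},
    {"antecedent": {0: (0.4, 0.7, 1.0), 1: (0.0, 0.3, 0.6)}, "label": 1, "weight": 0.6},
]

def classify(sample, rules):
    """Single-winner fuzzy classification: the class of the rule with the largest weighted firing strength."""
    best_label, best_score = None, -1.0
    for rule in rules:
        strength = rule["weight"]
        for feat, (a, b, c) in rule["antecedent"].items():
            strength *= tri(sample[feat], a, b, c)   # product t-norm over the antecedents
        if strength > best_score:
            best_label, best_score = rule["label"], strength
    return best_label

print(classify([0.25, 0.55], rules))   # -> 0 for this toy rule base
```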
{"title":"Cooperative fuzzy rulebase construction based on a novel fuzzy decision tree","authors":"E. Ahmadi, M. Taheri, N. Mirshekari, S. Hashemi, A. Sami, Ali K. Hamze","doi":"10.1109/IIT.2009.5413762","DOIUrl":"https://doi.org/10.1109/IIT.2009.5413762","url":null,"abstract":"Fuzzy Inference Systems (FIS) are much considerable due to their interpretability and uncertainty factors. Hence, Fuzzy Rule-Based Classifier Systems (FRBCS) are widely investigated in aspects of construction and parameter learning. Also, decision trees are recursive structures which are not only simple and accurate, but also are fast in classification due to partitioning the feature space in a multi-stage process. Combination of fuzzy reasoning and decision trees gathers capabilities of both systems in an integrated one. In this paper, a novel fuzzy decision tree (FDT) is proposed for extracting fuzzy rules which are both accurate and cooperative due to dependency structure of decision tree. Furthermore, a weighting method is used to emphasize the corporation of the rules. Finally, the proposed method is compared with a well-known rule construction method named SRC on 8 UCI datasets. Experiments show a significant improvement on classification performance of the proposed method in comparison with SRC.","PeriodicalId":239829,"journal":{"name":"2009 International Conference on Innovations in Information Technology (IIT)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124772889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Challenges in “mobilizing” desktop applications: a new methodology for requirements engineering
Pub Date: 2009-12-15 | DOI: 10.1109/IIT.2009.5413636
R. Mizouni, A. Serhani, R. Dssouli, A. Benharref
With the proliferation of mobile devices, the challenge today is to provide users with applications that are of real value. In most cases, these applications are mobilized versions of desktop applications, adapted to fit the contextual requirements and constraints of mobility. When developed from a desktop application, it is difficult to align the mobile application with user expectations because of the experience users already have with the desktop version. In addition, current practice lacks relevant guidance to assist the analyst in building such applications. To overcome this shortcoming, we propose a methodology for requirements elicitation when mobilizing desktop applications. This methodology relies, on the one hand, on the knowledge users have from their experience with the desktop application and, on the other hand, on learning from the strengths and limitations of desktop applications. It helps define the set of features that the mobile application should provide to meet users' expectations. An application has been mobilized following our methodology in order to evaluate it.
{"title":"Challenges in “mobilizing” desktop applications: a new methodology for requirements engineering","authors":"R. Mizouni, A. Serhani, R. Dssouli, A. Benharref","doi":"10.1109/IIT.2009.5413636","DOIUrl":"https://doi.org/10.1109/IIT.2009.5413636","url":null,"abstract":"With the proliferation of mobile devices, the challenge today is to provide users with applications that are of real value. These applications are, in most of the cases, mobilized versions of desktop applications that fit the contextual requirements of mobility constraints. When developed from a desktop application, it is difficult to align the mobile application with user expectations because of the experience the user has from the desktop application. In addition, in current practices, we can notice a lack of relevant guidance that assists the analyst in building such applications. To overcome this shortcoming, we propose a methodology for requirements elicitation when mobilizing desktop applications. This methodology relies on using knowledge the user has from her/his experience on the desktop application on one hand and learning from strengths and limitations of desktop applications on the other hand. It helps the definition of the set of features that the mobile application should provide to meet users' expectations. An application has been mobilized following our methodology to evaluate it.","PeriodicalId":239829,"journal":{"name":"2009 International Conference on Innovations in Information Technology (IIT)","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129748982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A combination of PSO and k-means methods to solve haplotype reconstruction problem
Pub Date: 2009-12-15 | DOI: 10.1109/IIT.2009.5413778
S. Sharifian-R, Ardeshir Baharian, E. Asgarian, A. Rasooli
Disease association studies are of great importance among the various fields of bioinformatics. Computational methods are particularly advantageous when experimental approaches fail to obtain accurate results. Haplotypes are believed to be the biological data most relevant to genetic diseases. In this paper, the problem of reconstructing haplotypes from error-containing SNP fragments is discussed. For this purpose, two new methods are proposed that combine k-means clustering with the particle swarm optimization (PSO) algorithm. The methods and their results on real biological and simulated datasets are presented, showing that they outperform either method used alone.
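As a rough sketch of the k-means half of such a combination (the fragment data, the fixed two-haplotype setting, and the naive initialization are assumptions for illustration; in the paper, PSO additionally drives the search over candidate haplotypes), the following Python snippet alternately assigns SNP fragments to the closer of two candidate haplotypes and re-estimates each haplotype by per-site majority vote:

```python
def dist(frag, hap):
    """Hamming distance over the sites the fragment actually covers (None = missing)."""
    return sum(1 for f, h in zip(frag, hap) if f is not None and f != h)

def majority(frags, site):
    """Per-site majority vote among the fragments assigned to a cluster."""
    votes = [f[site] for f in frags if f[site] is not None]
    return int(sum(votes) * 2 >= len(votes)) if votes else 0

def kmeans_haplotypes(fragments, n_sites, iters=20):
    h1 = [0] * n_sites
    h2 = [1] * n_sites          # crude initialization; PSO would propose better candidates
    for _ in range(iters):
        c1 = [f for f in fragments if dist(f, h1) <= dist(f, h2)]
        c2 = [f for f in fragments if dist(f, h1) > dist(f, h2)]
        h1 = [majority(c1, s) for s in range(n_sites)]
        h2 = [majority(c2, s) for s in range(n_sites)]
    return h1, h2

frags = [[0, 0, 1, None], [0, 0, None, 1], [1, 1, 0, None], [None, 1, 0, 0]]
print(kmeans_haplotypes(frags, 4))   # -> ([0, 0, 1, 1], [1, 1, 0, 0]) for this toy input
```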
{"title":"A combination of PSO and k-means methods to solve haplotype reconstruction problem","authors":"S. Sharifian-R, Ardeshir Baharian, E. Asgarian, A. Rasooli","doi":"10.1109/IIT.2009.5413778","DOIUrl":"https://doi.org/10.1109/IIT.2009.5413778","url":null,"abstract":"Disease association study is of great importance among various fields of study in bioinformatics. Computational methods happen to be advantageous specifically when experimental approaches fail to obtain accurate results. Haplotypes are believed to be the most responsible biological data for genetic diseases. In this paper, the problem of reconstructing haplotypes from error-containing SNP fragments is discussed. For this purpose, two new methods have been proposed by a combination of k-means clustering and particle swarm optimization algorithm. The methods and their implementation results on real biological and simulation datasets are represented which shows that they outperform the methods used alone.","PeriodicalId":239829,"journal":{"name":"2009 International Conference on Innovations in Information Technology (IIT)","volume":"174 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122754074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic processing of Arabic text
Pub Date: 2009-12-15 | DOI: 10.1109/IIT.2009.5413793
Ziad Osman, L. Hamandi, R. Zantout, F. Sibai
Automatic recognition of printed and handwritten documents remains an active area of research. Arabic is one of the languages that present special problems. Arabic is cursive and therefore necessitates a segmentation process to determine the boundaries of each character. Arabic characters may consist of multiple disconnected parts. Dots and diacritics are used in many Arabic characters and can appear above or below the main body of the character. In Arabic, the same letter has up to four different forms depending on where it appears in the word and on the letters adjacent to it. In this paper, a novel approach for recognizing Arabic script documents is described. The method starts with preprocessing, which involves binarization, noise reduction, and thinning. The text is then segmented into separate lines. Characters are then segmented by determining bifurcation points near the baseline. Segmented characters are compared to prestored templates to identify the best match. The template comparisons are based on central moments, Hu moments, and invariant moments. The method is shown to work satisfactorily for scanned printed Arabic text. The paper concludes with a discussion of the drawbacks of the method and a description of possible solutions.
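The following hedged Python/OpenCV sketch illustrates the template-matching step only: a segmented glyph and each stored template are compared through log-scaled Hu moment vectors, and the closest template wins. The binarized numpy images, the template dictionary, and the file names are assumptions; the paper also uses central and invariant moments alongside the Hu moments.

```python
import cv2
import numpy as np

def hu_signature(binary_img):
    """7-element log-scaled Hu moment vector of a binary (0/255) glyph image."""
    hu = cv2.HuMoments(cv2.moments(binary_img)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # log scale: raw moments span many orders of magnitude

def best_match(glyph, templates):
    """templates: dict mapping a character label to its binary template image."""
    sig = hu_signature(glyph)
    return min(templates, key=lambda ch: np.linalg.norm(sig - hu_signature(templates[ch])))

# usage with hypothetical files:
# templates = {"alef": cv2.imread("alef.png", 0), "ba": cv2.imread("ba.png", 0)}
# print(best_match(cv2.imread("segment_07.png", 0), templates))
```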
{"title":"Automatic processing of Arabic text","authors":"Ziad Osman, L. Hamandi, R. Zantout, F. Sibai","doi":"10.1109/IIT.2009.5413793","DOIUrl":"https://doi.org/10.1109/IIT.2009.5413793","url":null,"abstract":"Automatic recognition of printed and handwritten documents remains an active area of research. Arabic is one of the languages that present special problems. Arabic is cursive and therefore necessitates a segmentation process to determine the boundaries of a character. Arabic characters consist of multiple disconnected parts. Dots and Diacritics are used in many Arabic characters and can appear above or below the main body of the character. In Arabic, the same letter has up to four different forms depending on where it appears in the word and depending on the letters that are adjacent to it. In this paper, a novel approach is described that recognizes Arabic script documents. The method starts by preprocessing which involves binarization, noise reduction, and thinning. The text is then segmented into separate lines. Characters are then segmented by determining bifurcation points that are near the baseline. Segmented characters are then compared to prestored templates to identify the best match. The template comparisons are based on central moments, Hu moments, and Invariant moments. The method is proven to work satisfactorily for scanned printed Arabic text. The paper concludes with a discussion of the drawbacks of the method, and a description of possible solutions.","PeriodicalId":239829,"journal":{"name":"2009 International Conference on Innovations in Information Technology (IIT)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133206369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improvement of Hessian based vessel segmentation using two stage threshold and morphological image recovering
Pub Date: 2009-12-15 | DOI: 10.1109/IIT.2009.5413357
S. Mirhassani, M. Hosseini, A. Behrad
Many vessel segmentation methods employ a Hessian-based vessel enhancement filter (HBVF) as an efficient step. In this paper, the HBVF method is the first step of the proposed segmentation algorithm. Afterward, to remove non-vessel structures from the image, a high threshold is applied to the filtered image. Since this threshold also removes some weak vessels, they are recovered using the Hough transform and morphological operations. The resulting image is then combined with a version of the vesselness-filtered image binarized with a low threshold. As a consequence of this combination, most vessels are detected. In the final step, fine particles are removed from the result according to their size in order to reduce false positives. Experiments indicate promising results that demonstrate the efficiency of the proposed algorithm.
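A minimal scikit-image sketch of the two-threshold idea is given below. It is not the paper's exact pipeline: where the paper recovers weak vessels with the Hough transform and morphology, hysteresis thresholding plays that role here, keeping low-threshold pixels only if they connect to high-threshold pixels and then dropping small particles. The percentile thresholds, the minimum object size, and the input file name are illustrative assumptions.

```python
import numpy as np
from skimage import io, filters, morphology

image = io.imread("retina.png", as_gray=True)      # hypothetical input image
vesselness = filters.frangi(image)                  # Hessian-based vessel enhancement

low_t = np.percentile(vesselness, 95)               # low threshold: weak vessels, many false positives
high_t = np.percentile(vesselness, 99)               # high threshold: strong, confident vessels
combined = filters.apply_hysteresis_threshold(vesselness, low_t, high_t)
mask = morphology.remove_small_objects(combined, min_size=50)   # drop fine false-positive particles
```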
{"title":"Improvement of Hessian based vessel segmentation using two stage threshold and morphological image recovering","authors":"S. Mirhassani, M. Hosseini, A. Behrad","doi":"10.1109/IIT.2009.5413357","DOIUrl":"https://doi.org/10.1109/IIT.2009.5413357","url":null,"abstract":"In many of vessel segmentation methods, Hessian based vessel enhancement filter as an efficient step is employed. In this paper, for segmentation of vessels, HBVF method is the first step of the algorithm. Afterward, to remove non-vessels from image, a high level threshold is applied to the filtered image. Since, as a result of threshold some of weak vessels are removed, recovering of vessels using Hough transform and morphological operations is accomplished. Then, the yielded image is combined with a version of vesselness filtered image which is converted to a binary image using a low level threshold. As a consequence of image combination, most of vessels are detected. In the final step, to reduce the false positives, fine particles are removed from the result according to their size. Experiments indicate the promising results which demonstrate the efficiency of the proposed algorithm.","PeriodicalId":239829,"journal":{"name":"2009 International Conference on Innovations in Information Technology (IIT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115481128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A software development tool for improving Quality of Service in Distributed Database Systems
Pub Date: 2009-12-15 | DOI: 10.1109/IIT.2009.5413375
I. Hababeh
Distributed Database Management Systems (DDBMS) are measured by their Quality of Service (QoS) improvements in real-world applications. To analyze the behavior of a distributed database system and measure its QoS performance, an integrated tool for a DDBMS is developed and presented.
{"title":"A software development tool for improving Quality of Service in Distributed Database Systems","authors":"I. Hababeh","doi":"10.1109/IIT.2009.5413375","DOIUrl":"https://doi.org/10.1109/IIT.2009.5413375","url":null,"abstract":"The Distributed Database Management Systems (DDBMS) are measured by their Quality of Service (QoS) improvements on the real world applications. To analyze the behavior of the distributed database system and to measure its quality of service performance, an integrated tool for a DDBMS is developed and presented.","PeriodicalId":239829,"journal":{"name":"2009 International Conference on Innovations in Information Technology (IIT)","volume":"62 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114109979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Application of distributed safe log management in Small-Scale, High-Risk system
Pub Date: 2009-12-15 | DOI: 10.1109/IIT.2009.5413751
Yuchao Chen, Weiming Wang, M. Gao
We describe an implementation of a log management structure for storing logs in a Small-Scale, High-Risk distributed environment. It protects log integrity even when some storage nodes fail, guards against the divulgence of secret logs, and does not cause large space consumption. After collecting logs from agents, the Collect Center disperses them into pieces using Rabin's Information Dispersal Algorithm (IDA) and builds a Distributed Fingerprint (DFP) for integrity checking. Although this structure provides integrity, it alone is not sufficient to prevent the divulgence of secret logs during log dispersal and retrieval, so cryptographic techniques are additionally applied in the log management.
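A hedged sketch of the dispersal-plus-fingerprint idea follows. It is not Rabin's matrix formulation or the paper's implementation: here each block of m bytes is treated as the values of a degree-(m-1) polynomial over a prime field, every storage node receives one evaluation per block, any m pieces reconstruct the original by Lagrange interpolation, and SHA-256 digests of the pieces stand in for the Distributed Fingerprint. The prime, piece layout, and parameters are assumptions.

```python
import hashlib

P = 2**31 - 1  # prime field; byte values (< 256) are represented exactly

def lagrange_eval(points, x_target):
    """Evaluate, mod P, the unique polynomial through `points` [(x, y), ...] at x_target."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x_target - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def disperse(data, n, m):
    """Split `data` into n pieces such that any m of them suffice to rebuild it."""
    pad = (-len(data)) % m
    padded = data + bytes(pad)
    pieces = [[] for _ in range(n)]
    for off in range(0, len(padded), m):
        block = padded[off:off + m]
        pts = [(k + 1, block[k]) for k in range(m)]      # bytes = values of a degree<m polynomial
        for x in range(1, n + 1):
            pieces[x - 1].append(lagrange_eval(pts, x))  # one evaluation per storage node
    fingerprints = [hashlib.sha256(str(p).encode()).hexdigest() for p in pieces]
    return pieces, fingerprints, pad

def reconstruct(shares, m, pad):
    """shares: at least m pairs (node index x, piece) as produced by disperse."""
    shares = shares[:m]
    out = bytearray()
    for j in range(len(shares[0][1])):
        pts = [(x, piece[j]) for x, piece in shares]
        for k in range(m):
            out.append(lagrange_eval(pts, k + 1))        # the original bytes sit at x = 1..m
    return bytes(out[:len(out) - pad])

pieces, fps, pad = disperse(b"audit log entry", n=5, m=3)
print(reconstruct([(5, pieces[4]), (2, pieces[1]), (4, pieces[3])], m=3, pad=pad))  # b'audit log entry'
```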
{"title":"Application of distributed safe log management in Small-Scale, High-Risk system","authors":"Yuchao Chen, Weiming Wang, M. Gao","doi":"10.1109/IIT.2009.5413751","DOIUrl":"https://doi.org/10.1109/IIT.2009.5413751","url":null,"abstract":"We described an implementation of log managing structure to store log in a Small-Scale, High-Risk distributed environment, which protects the integrity for log even some of storage nodes fail and guarantees the security in the case of secret log divulgence, meanwhile will not cause large space consumption. After collecting log from agents, Collect Center disperses log into pieces using Rabin's Information Dispersal Algorithm (IDA), builds Distributed Fingerprint (DFP) for integrity check. Integrity though the structure gets, it is still not safe enough to avoid the divulgence of secret log in the process of log dispersal and retrieval, some cryptography technology is applied into the log management.","PeriodicalId":239829,"journal":{"name":"2009 International Conference on Innovations in Information Technology (IIT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128754856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An approach for web services composition based on QoS and gravitational search algorithm
Pub Date: 2009-12-15 | DOI: 10.1109/IIT.2009.5413773
B. Zibanezhad, K. Zamanifar, N. Nematbakhsh, F. Mardukhi
QoS-based web service composition is an NP-hard problem, so bio-inspired optimization algorithms are well suited to solving it. Moreover, the QoS of the composite service is a key factor in satisfying users, and users prefer different QoS trade-offs according to their needs. We propose a service composition algorithm based on quality of service and the gravitational search algorithm, one of the recent optimization algorithms, which has many merits, such as rapid convergence, low memory use, and the consideration of special parameters such as the distance between solutions. This paper presents a new approach to service selection for service composition based on QoS and under the user's constraints; accordingly, the QoS measures are weighted according to the user's constraints and priorities. The experimental results show that the method achieves compositions effectively and has considerable potential for practical application.
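As a small illustration of the fitness side of such an approach (the QoS numbers, weights, and constraints are invented for the example, and exhaustive enumeration is used only because the example is tiny; in the paper the gravitational search algorithm explores this space instead), the sketch below scores a sequential composition by a weighted utility of aggregated response time, cost, and reliability under the user's constraints:

```python
from itertools import product

candidates = [  # one list of candidate services per abstract task: (response time, cost, reliability)
    [(0.8, 3.0, 0.99), (0.5, 5.0, 0.95)],
    [(1.2, 2.0, 0.97), (0.9, 4.0, 0.99)],
]
weights = {"time": 0.4, "cost": 0.3, "rel": 0.3}   # user priorities
max_time, max_cost = 2.0, 8.0                        # user constraints

def utility(composition):
    time = sum(s[0] for s in composition)            # sequential flow: times and costs add,
    cost = sum(s[1] for s in composition)            # reliabilities multiply
    rel = 1.0
    for s in composition:
        rel *= s[2]
    if time > max_time or cost > max_cost:
        return float("-inf")                         # constraint violation
    return -weights["time"] * time - weights["cost"] * cost + weights["rel"] * rel

best = max(product(*candidates), key=utility)        # a search algorithm replaces this in practice
print(best, utility(best))
```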
{"title":"An approach for web services composition based on QoS and gravitational search algorithm","authors":"B. Zibanezhad, K. Zamanifar, N. Nematbakhsh, F. Mardukhi","doi":"10.1109/IIT.2009.5413773","DOIUrl":"https://doi.org/10.1109/IIT.2009.5413773","url":null,"abstract":"Web services composition based on QoS is the NP-hard problem, so the bionics optimization algorithms can solve it well. On the other hand, QoS of compound service is a key factor for satisfying the users. The users prefer different QoSs according to their desires. We have Proposed the services composition algorithm based on quality of services and gravitational search algorithm which is one of the recent optimization algorithms and it has many merits, for example rapid convergence speed, less memory use, considering a lot of special parameters such as the distance between solutions, etc. This paper presents a new approach to Service selection for Service Composition based on QoS and under the user's constraints. So in this approach, the QoS measures are considered based on the user's constraints and priorities. The experimental results show the method can achieve the composition effectively and it has a lot of potentiality for being applied.","PeriodicalId":239829,"journal":{"name":"2009 International Conference on Innovations in Information Technology (IIT)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117040017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
True state-space complexity prediction: By the proxel-based simulation method
Pub Date: 2009-12-15 | DOI: 10.1109/IIT.2009.5413761
S. Lazarova-Molnar
All state-space based simulation methods are doomed by the phenomenon of state-space explosion. This condition occurs when the simulation becomes memory-infeasible as simulation time advances, due to the large number of states in the model. However, state-space explosion does not depend solely on the number of discrete states of the model, as is typically assumed. While that view is correct and completely sufficient for Markovian models, it is certainly not a sufficient criterion when models involve non-exponential probability distribution functions. In this paper we discuss the phenomenon of state-space explosion in terms of accurate complexity prediction for a general class of models. Its early diagnosis is especially significant in the case of proxel-based simulation, as it can lead towards hybridization of the method by employing discrete phase approximations for the critical states and transitions. This can significantly reduce the computational complexity of the simulation.
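To make the proxel view concrete, the following minimal Python sketch (not the authors' tool) simulates a two-state model with a Weibull sojourn time and reports how many proxels — (discrete state, age) pairs carrying probability mass — exist per time step; it is exactly this count, driven by the age variables of non-exponential distributions rather than by the two discrete states alone, that explodes. The Weibull hazard, step size, and horizon are assumptions for the example.

```python
import math
from collections import defaultdict

DT = 0.1  # time step

def weibull_hazard(age, k=2.0, lam=1.0):
    """Instantaneous transition rate of a Weibull sojourn-time distribution."""
    return (k / lam) * (age / lam) ** (k - 1)

def step(proxels):
    """Expand every proxel {(state, age): prob} by one time step of length DT."""
    nxt = defaultdict(float)
    for (state, age), prob in proxels.items():
        p_go = min(1.0, weibull_hazard(age) * DT)             # probability of leaving in this step
        other = 1 - state                                      # two discrete states: 0 and 1
        nxt[(other, 0.0)] += prob * p_go                       # transition: the age clock resets
        nxt[(state, round(age + DT, 10))] += prob * (1 - p_go)  # stay: the age clock advances
    return nxt

proxels = {(0, 0.0): 1.0}
for t in range(1, 51):
    proxels = step(proxels)
    if t % 10 == 0:
        p1 = sum(p for (s, _), p in proxels.items() if s == 1)
        print(f"t={t * DT:.1f}  proxels={len(proxels)}  P(state=1)={p1:.3f}")
```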
{"title":"True state-space complexity prediction: By the proxel-based simulation method","authors":"S. Lazarova-Molnar","doi":"10.1109/IIT.2009.5413761","DOIUrl":"https://doi.org/10.1109/IIT.2009.5413761","url":null,"abstract":"All state-space based simulation methods are doomed by the phenomenon of state-space explosion. The condition occurs when the simulation becomes memory-infeasible as simulation time advances due to the large number of states in the model. However, state-space explosion is not something that depends solely on the number of discrete states of the model as typically observed. While this is correct and completely sufficient for Markovian models, it is certainly not a sufficient criterion when models involve non-exponential probability distribution functions. In this paper we discuss the phenomenon of state-space explosion in terms of accurate complexity prediction for a general class of models. Its early diagnosis is especially significant in the case of proxel-based simulation, as it can lead towards hybridization of the method by employing discrete phase approximations for the critical states and transitions. This can significantly reduce the computational complexity of the simulation.","PeriodicalId":239829,"journal":{"name":"2009 International Conference on Innovations in Information Technology (IIT)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131256229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Database virtualization technology in ubiquitous computing
Pub Date: 2009-12-15 | DOI: 10.1109/IIT.2009.5413639
Yuji Wada, J. Sawamoto, Yuta Watanabe, T. Katoh
In this paper, our research objective is to develop a database virtualization technique so that data analysts and other users who apply data mining methods in their jobs can use all ubiquitous databases on the Internet as if they were a single database, thereby helping to reduce workloads such as collecting data from the databases and data cleansing. In this study, we first examine the advantages of XML Schema and propose a database virtualization method by which ubiquitous databases such as relational databases, object-oriented databases, and XML databases can be used as if they all behaved as a single database. Next, we show that this virtualization method can describe ubiquitous database schemas in a unified fashion using XML Schema. Moreover, it provides a high-level concept of distributed database management across databases of the same and of different types, as well as a location transparency feature.
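As a toy illustration of the unification idea (the table description, the SQL-to-XSD type mapping, and the output layout are assumptions rather than the paper's mapping rules), the following Python snippet renders a relational table definition as an XML Schema fragment, the common form in which relational, object-oriented, and XML sources would be exposed to the analyst:

```python
import xml.etree.ElementTree as ET

# Minimal, assumed mapping from SQL column types to XML Schema types.
SQL_TO_XSD = {"INTEGER": "xs:integer", "VARCHAR": "xs:string", "DATE": "xs:date"}

def table_to_xsd(table_name, columns):
    """columns: list of (column_name, sql_type) pairs; returns an XSD fragment as a string."""
    schema = ET.Element("xs:schema", {"xmlns:xs": "http://www.w3.org/2001/XMLSchema"})
    element = ET.SubElement(schema, "xs:element", {"name": table_name})
    ctype = ET.SubElement(element, "xs:complexType")
    seq = ET.SubElement(ctype, "xs:sequence")
    for name, sql_type in columns:
        ET.SubElement(seq, "xs:element", {"name": name, "type": SQL_TO_XSD[sql_type]})
    return ET.tostring(schema, encoding="unicode")

print(table_to_xsd("customer", [("id", "INTEGER"), ("name", "VARCHAR"), ("since", "DATE")]))
```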
{"title":"Database virtualization technology in ubiquitous computing","authors":"Yuji Wada, J. Sawamoto, Yuta Watanabe, T. Katoh","doi":"10.1109/IIT.2009.5413639","DOIUrl":"https://doi.org/10.1109/IIT.2009.5413639","url":null,"abstract":"In this paper, our research objective is to develop a database virtualization technique so that data analysts or other users who apply data mining methods to their jobs can use all ubiquitous databases in the Internet as if they were recognized as a single database, thereby helping to reduce their workloads such as data collection from the databases and data cleansing works. In this study, firstly we examine XML scheme advantages and propose a database virtualization method by which such ubiquitous databases as relational databases, object-oriented databases, and XML databases are useful, as if they all behaved as a single database. Next, we show the method of virtualization of ubiquitous databases can describe ubiquitous database schema in a unified fashion using the XML schema. Moreover, it consists of a high-level concept of distributed database management of the same type and of different types, and also of a location transparency feature.","PeriodicalId":239829,"journal":{"name":"2009 International Conference on Innovations in Information Technology (IIT)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131277792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}