
Latest publications: 2009 International Conference on Innovations in Information Technology (IIT)

Cooperative fuzzy rulebase construction based on a novel fuzzy decision tree
Pub Date : 2009-12-15 DOI: 10.1109/IIT.2009.5413762
E. Ahmadi, M. Taheri, N. Mirshekari, S. Hashemi, A. Sami, Ali K. Hamze
Fuzzy Inference Systems (FIS) have attracted considerable attention due to their interpretability and their handling of uncertainty. Hence, Fuzzy Rule-Based Classifier Systems (FRBCS) are widely investigated with respect to rule construction and parameter learning. Decision trees are recursive structures that are not only simple and accurate but also fast at classification, since they partition the feature space in a multi-stage process. Combining fuzzy reasoning with decision trees gathers the capabilities of both in a single integrated system. In this paper, a novel fuzzy decision tree (FDT) is proposed for extracting fuzzy rules that are both accurate and cooperative, owing to the dependency structure of the decision tree. Furthermore, a weighting method is used to emphasize the cooperation of the rules. Finally, the proposed method is compared with a well-known rule construction method, SRC, on 8 UCI datasets. Experiments show a significant improvement in classification performance compared with SRC.
Citations: 2
Challenges in “mobilizing” desktop applications: a new methodology for requirements engineering
Pub Date : 2009-12-15 DOI: 10.1109/IIT.2009.5413636
R. Mizouni, A. Serhani, R. Dssouli, A. Benharref
With the proliferation of mobile devices, the challenge today is to provide users with applications that are of real value. These applications are, in most cases, mobilized versions of desktop applications that fit the contextual requirements of mobility constraints. When developed from a desktop application, it is difficult to align the mobile application with user expectations, because of the experience users already have with the desktop version. In addition, current practice offers little guidance to assist the analyst in building such applications. To overcome this shortcoming, we propose a methodology for requirements elicitation when mobilizing desktop applications. The methodology relies on the knowledge users have gained from the desktop application on one hand, and on learning from the strengths and limitations of desktop applications on the other. It helps define the set of features the mobile application should provide to meet users' expectations. An application has been mobilized following our methodology to evaluate it.
Citations: 2
A combination of PSO and k-means methods to solve haplotype reconstruction problem
Pub Date : 2009-12-15 DOI: 10.1109/IIT.2009.5413778
S. Sharifian-R, Ardeshir Baharian, E. Asgarian, A. Rasooli
Disease association studies are of great importance among the various fields of study in bioinformatics. Computational methods are advantageous particularly when experimental approaches fail to obtain accurate results. Haplotypes are believed to be the biological data most relevant to genetic diseases. In this paper, the problem of reconstructing haplotypes from error-containing SNP fragments is discussed. For this purpose, two new methods are proposed that combine k-means clustering with the particle swarm optimization algorithm. The methods and their results on real biological and simulated datasets are presented, showing that they outperform either method used alone.
Citations: 3
Automatic processing of Arabic text
Pub Date : 2009-12-15 DOI: 10.1109/IIT.2009.5413793
Ziad Osman, L. Hamandi, R. Zantout, F. Sibai
Automatic recognition of printed and handwritten documents remains an active area of research, and Arabic is one of the languages that present special problems. Arabic script is cursive and therefore necessitates a segmentation process to determine character boundaries. Arabic characters consist of multiple disconnected parts: dots and diacritics are used in many characters and can appear above or below the main body. Moreover, the same letter has up to four different forms depending on where it appears in the word and on the adjacent letters. In this paper, a novel approach for recognizing Arabic script documents is described. The method starts with preprocessing, which involves binarization, noise reduction, and thinning. The text is then segmented into separate lines, and characters are segmented by determining bifurcation points near the baseline. Segmented characters are then compared with prestored templates to identify the best match; the comparison is based on central moments, Hu moments, and invariant moments. The method is shown to work satisfactorily on scanned printed Arabic text. The paper concludes with a discussion of the drawbacks of the method and a description of possible solutions.
Citations: 8
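As background for the moment-based template comparison the abstract mentions, here is a minimal sketch of central moments and the first Hu invariant, phi1 = eta20 + eta02, on a binary glyph. The helper names and the toy glyphs are hypothetical, and the paper's full pipeline (all seven Hu moments, segmentation, matching) is not reproduced:

```python
# Sketch of moment-based comparison: central moments mu_pq are translation-
# invariant, normalized moments eta_pq add scale invariance, and
# phi1 = eta20 + eta02 is the first Hu invariant (also rotation-invariant).

def raw_moment(img, p, q):
    return sum(x**p * y**q * v for y, row in enumerate(img)
                               for x, v in enumerate(row))

def phi1(img):
    m00 = raw_moment(img, 0, 0)
    xbar = raw_moment(img, 1, 0) / m00
    ybar = raw_moment(img, 0, 1) / m00
    mu20 = sum((x - xbar)**2 * v for y, row in enumerate(img)
                                 for x, v in enumerate(row))
    mu02 = sum((y - ybar)**2 * v for y, row in enumerate(img)
                                 for x, v in enumerate(row))
    return (mu20 + mu02) / m00**2   # eta20 + eta02 (p + q = 2)

# A glyph and a translated copy of it yield the same invariant:
a = [[0, 1, 1, 0],
     [0, 1, 0, 0],
     [0, 1, 0, 0]]
b = [[0, 0, 0, 0],
     [0, 0, 1, 1],
     [0, 0, 1, 0],
     [0, 0, 1, 0]]
print(abs(phi1(a) - phi1(b)) < 1e-12)
```

Template matching then reduces to comparing such invariant vectors between a segmented character and each prestored template.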
Improvement of Hessian based vessel segmentation using two stage threshold and morphological image recovering
Pub Date : 2009-12-15 DOI: 10.1109/IIT.2009.5413357
S. Mirhassani, M. Hosseini, A. Behrad
Many vessel segmentation methods employ a Hessian-based vesselness filter (HBVF) as an efficient enhancement step, and it is the first step of the proposed algorithm. Afterward, to remove non-vessel structures, a high threshold is applied to the filtered image. Since thresholding also removes some weak vessels, they are recovered using the Hough transform and morphological operations. The resulting image is then combined with a version of the vesselness-filtered image binarized with a low threshold, so that most vessels are detected. In the final step, to reduce false positives, small particles are removed from the result according to their size. Experiments show promising results that demonstrate the efficiency of the proposed algorithm.
Citations: 7
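The two-stage threshold described here (a high threshold for reliable vessel pixels, a low one for weak responses, then recovery of weak pixels attached to strong ones) is closely related to hysteresis thresholding. A minimal sketch under that assumption, omitting the Hough and morphological steps of the paper:

```python
from collections import deque

# Hysteresis-style two-stage threshold (illustrative, not the paper's exact
# pipeline): pixels above `high` are seeds; pixels above `low` survive only
# if 4-connected to a seed, which recovers weak vessel pixels attached to
# strong responses while discarding isolated noise.

def two_stage_threshold(img, low, high):
    rows, cols = len(img), len(img[0])
    keep = [[False] * cols for _ in range(rows)]
    queue = deque((r, c) for r in range(rows) for c in range(cols)
                  if img[r][c] >= high)
    for r, c in queue:
        keep[r][c] = True
    while queue:                        # BFS outward from the seeds
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and not keep[nr][nc] and img[nr][nc] >= low):
                keep[nr][nc] = True
                queue.append((nr, nc))
    return keep

img = [[0.9, 0.4, 0.1],
       [0.0, 0.4, 0.0],
       [0.0, 0.4, 0.3]]   # a weak chain attached to the strong 0.9 pixel
mask = two_stage_threshold(img, low=0.3, high=0.8)
print(mask)
```

In the toy image the chain of 0.4/0.3 responses survives because it connects to the 0.9 seed, while the isolated 0.1 pixel is dropped.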
A software development tool for improving Quality of Service in Distributed Database Systems
Pub Date : 2009-12-15 DOI: 10.1109/IIT.2009.5413375
I. Hababeh
Distributed Database Management Systems (DDBMS) are measured by their Quality of Service (QoS) improvements in real-world applications. To analyze the behavior of a distributed database system and to measure its quality-of-service performance, an integrated tool for a DDBMS is developed and presented.
Citations: 3
Application of distributed safe log management in Small-Scale, High-Risk system
Pub Date : 2009-12-15 DOI: 10.1109/IIT.2009.5413751
Yuchao Chen, Weiming Wang, M. Gao
We describe an implementation of a log management structure for storing logs in a Small-Scale, High-Risk distributed environment. It protects log integrity even if some storage nodes fail, guarantees security in the case of secret-log divulgence, and does not cause large space consumption. After collecting logs from agents, the Collect Center disperses each log into pieces using Rabin's Information Dispersal Algorithm (IDA) and builds a Distributed Fingerprint (DFP) for integrity checking. Although the structure provides integrity, it is not by itself sufficient to prevent the divulgence of secret logs during dispersal and retrieval, so cryptographic techniques are also applied in the log management.
Citations: 0
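A minimal sketch of m-of-n information dispersal in the spirit of Rabin's IDA may clarify the dispersal step: the data is split into blocks of m symbols over GF(257), each of n shares stores one Vandermonde-weighted combination per block, and any m shares reconstruct the data by solving a linear system mod 257. The DFP fingerprinting and cryptographic layers of the paper are omitted, and padding bookkeeping is assumed handled out of band:

```python
P = 257  # prime > 255, so every byte is a field element of GF(P)

def encode(data, n, m):
    # pad to a multiple of m (pad length assumed tracked out of band)
    data = list(data) + [0] * (-len(data) % m)
    blocks = [data[k:k + m] for k in range(0, len(data), m)]
    shares = {}
    for i in range(1, n + 1):
        row = [pow(i, j, P) for j in range(m)]        # Vandermonde row
        shares[i] = [sum(r * b for r, b in zip(row, blk)) % P
                     for blk in blocks]
    return shares

def solve_mod(mat, rhs):
    # Gauss-Jordan elimination over GF(P); mat is m x m, rhs is m x nblocks
    m = len(mat)
    A = [mat[r][:] + rhs[r][:] for r in range(m)]
    for col in range(m):
        piv = next(r for r in range(col, m) if A[r][col] % P)
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], P - 2, P)              # Fermat inverse
        A[col] = [v * inv % P for v in A[col]]
        for r in range(m):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(v - f * w) % P for v, w in zip(A[r], A[col])]
    return [row[m:] for row in A]

def reconstruct(subset, m):
    ids = sorted(subset)[:m]
    mat = [[pow(i, j, P) for j in range(m)] for i in ids]
    rhs = [subset[i] for i in ids]
    cols = solve_mod(mat, rhs)        # row j = symbol j of every block
    nblocks = len(cols[0])
    return bytes(cols[j][k] for k in range(nblocks) for j in range(m))

shares = encode(b"secret log line", n=5, m=3)
partial = {i: shares[i] for i in (1, 3, 5)}   # any 3 of the 5 nodes suffice
print(reconstruct(partial, 3))
```

Because distinct evaluation points make every m x m Vandermonde submatrix invertible mod 257, the loss of up to n - m storage nodes is tolerated, while each share is only 1/m the data size (plus one symbol of field overhead per block).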
An approach for web services composition based on QoS and gravitational search algorithm
Pub Date : 2009-12-15 DOI: 10.1109/IIT.2009.5413773
B. Zibanezhad, K. Zamanifar, N. Nematbakhsh, F. Mardukhi
QoS-based web service composition is an NP-hard problem, so bio-inspired optimization algorithms can solve it well. Moreover, the QoS of a composite service is a key factor in satisfying users, who prefer different QoS trade-offs according to their desires. We propose a service composition algorithm based on quality of service and the gravitational search algorithm, a recent optimization algorithm with many merits, such as rapid convergence, low memory use, and consideration of special parameters such as the distance between solutions. The paper thus presents a new approach to service selection for service composition based on QoS under the user's constraints: the QoS measures are weighted according to the user's constraints and priorities. Experimental results show that the method achieves composition effectively and has considerable potential for application.
Citations: 22
True state-space complexity prediction: By the proxel-based simulation method
Pub Date : 2009-12-15 DOI: 10.1109/IIT.2009.5413761
S. Lazarova-Molnar
All state-space based simulation methods are doomed by the phenomenon of state-space explosion. The condition occurs when the simulation becomes memory-infeasible as simulation time advances due to the large number of states in the model. However, state-space explosion is not something that depends solely on the number of discrete states of the model as typically observed. While this is correct and completely sufficient for Markovian models, it is certainly not a sufficient criterion when models involve non-exponential probability distribution functions. In this paper we discuss the phenomenon of state-space explosion in terms of accurate complexity prediction for a general class of models. Its early diagnosis is especially significant in the case of proxel-based simulation, as it can lead towards hybridization of the method by employing discrete phase approximations for the critical states and transitions. This can significantly reduce the computational complexity of the simulation.
Citations: 0
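A minimal sketch of the proxel method may help: a proxel is a (state, age-in-state) pair carrying probability mass, and each time step expands every proxel into "transition fires" and "does not fire" children using the instantaneous rate (hazard) of the enabled transition, which is how non-exponential distributions enter. The two-state model and its Weibull/exponential parameters below are illustrative assumptions, not from the paper:

```python
import math
from collections import defaultdict

# Proxel-based simulation sketch for a toy two-state model: proxels with the
# same (state, age) are merged each step, which keeps growth linear here but
# illustrates where state-space explosion comes from in richer models.

DT, STEPS = 0.1, 50

def hazard(state, age):
    if state == "ON":                       # ON -> OFF: Weibull(k=2, lam=1)
        k, lam = 2.0, 1.0
        return (k / lam) * (age / lam) ** (k - 1)
    return 0.5                              # OFF -> ON: exponential, rate 0.5

NEXT = {"ON": "OFF", "OFF": "ON"}

proxels = {("ON", 0.0): 1.0}                # start in ON with age 0
for _ in range(STEPS):
    nxt = defaultdict(float)
    for (state, age), p in proxels.items():
        q = min(hazard(state, age) * DT, 1.0)    # P(fire within dt)
        if q > 0:
            nxt[(NEXT[state], 0.0)] += p * q     # fired: new state, age resets
        nxt[(state, round(age + DT, 10))] += p * (1.0 - q)   # survived
    proxels = nxt

total = sum(proxels.values())
p_on = sum(p for (s, _), p in proxels.items() if s == "ON")
print(round(total, 6), round(p_on, 4))
```

Total probability stays at 1 by construction; the number of proxels per step is what the paper's complexity prediction would estimate, and where discrete phase approximations could cap it.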
Database virtualization technology in ubiquitous computing
Pub Date : 2009-12-15 DOI: 10.1109/IIT.2009.5413639
Yuji Wada, J. Sawamoto, Yuta Watanabe, T. Katoh
Our research objective is to develop a database virtualization technique so that data analysts and other users who apply data mining methods in their work can use all ubiquitous databases on the Internet as if they were a single database, thereby reducing workloads such as collecting data from the databases and data cleansing. In this study, we first examine the advantages of XML Schema and propose a database virtualization method by which ubiquitous databases such as relational, object-oriented, and XML databases behave as if they were a single database. Next, we show that the method can describe ubiquitous database schemas in a unified fashion using XML Schema. Moreover, it provides a high-level concept of distributed database management across databases of the same and of different types, as well as a location transparency feature.
Citations: 6