
International Review on Computers and Software: Latest Publications

Algorithmic Model to Limit TCP Protocol Congestion in End-To-End Networks
Pub Date : 2017-05-31 DOI: 10.15866/irecos.v12i3.13226
Andrés Felipe Hernández Leon, O. S. Parra, Miguel José Espitia Rico
An algorithmic model is presented to limit congestion in end-to-end TCP networks: current networks of this type are evaluated and their present shortcomings are identified. The metrics that are most important and decisive for end-to-end TCP transmissions, such as segment size, buffer capacity, ACK times and concurrent services, are investigated, together with how they should be monitored and configured to obtain the best result under previously identified network conditions. The model is designed and presented as a series of steps to follow according to the actual factors that an end-to-end network presents, as well as the implementation of its final design; the same methodology is used during the testing and the simulation carried out with the ns2 software. Simulations of the encountered scenarios are compared with the results of an actual end-to-end network in order to establish the main result obtained. A series of recommendations is made, the conclusions drawn by the research are listed, and some considerations on future work are given.
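The abstract lists the tuning knobs (segment size, buffer capacity, ACK timing, concurrent services) without giving the decision rules. The Python sketch below only illustrates how such a step-by-step tuning model could be expressed in code; the thresholds, metric names and the recommend_settings function are assumptions, not the authors' algorithm.

# Illustrative sketch (not the paper's algorithm): map measured end-to-end
# metrics to TCP configuration hints, following the kind of step-by-step
# rules the abstract describes. All thresholds are invented for the example.
from dataclasses import dataclass

@dataclass
class PathMetrics:
    rtt_ms: float          # measured round-trip time
    loss_rate: float       # observed segment loss ratio
    buffer_bytes: int      # bottleneck buffer capacity
    concurrent_flows: int  # services sharing the path

def recommend_settings(m: PathMetrics) -> dict:
    # Step 1: size the congestion window to roughly the per-flow share
    # of the buffer at the bottleneck.
    cwnd = max(1, m.buffer_bytes // (1460 * max(1, m.concurrent_flows)))
    # Step 2: pick a smaller MSS on lossy paths to reduce retransmission cost.
    mss = 536 if m.loss_rate > 0.02 else 1460
    # Step 3: enable delayed ACKs only when RTT is short and loss is low.
    delayed_ack = m.rtt_ms < 50 and m.loss_rate < 0.01
    return {"cwnd_segments": cwnd, "mss_bytes": mss, "delayed_ack": delayed_ack}

if __name__ == "__main__":
    print(recommend_settings(PathMetrics(rtt_ms=80, loss_rate=0.03,
                                         buffer_bytes=64_000, concurrent_flows=4)))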
Citations: 0
Real Time Vision Based Method for Finger Counting Through Shape Analysis with Convex Hull and PCA Techniques
Pub Date : 2017-05-31 DOI: 10.15866/irecos.v12i3.12278
O. B. Henia
This paper presents a real-time vision-based finger counting method combining convex-hull detection and PCA techniques. The method starts by segmenting an input image to detect the area corresponding to the observed hand. For that purpose, a skin color detection method is used to differentiate the foreground containing the hand from the image background. Lighting variation can affect the accuracy of the segmentation, which in turn affects the behavior of the proposed method; to deal with this problem, the HLS color space is used to represent the colors. The hand contour is then computed, and fingertips are detected through the convex hull and convexity defects of the hand shape. The convex-hull algorithm is simple and gives accurate results when more than one finger is visible in the input image, but its accuracy decreases when only one finger is observed. To overcome this problem, principal component analysis (PCA) is used to analyze the hand shape and detect the single-finger case with better accuracy. The proposed method could be used in a Human-Computer Interaction (HCI) system where the machine reacts to each detected number. Both real and synthetic images are used to test and demonstrate the potential of our method.
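As a rough illustration of the pipeline described above (skin segmentation in HLS, contour extraction, convex hull and convexity defects), the following OpenCV sketch counts fingers in a single frame. The HLS thresholds and the defect-depth filter are assumptions chosen for the example, not the paper's values, and the PCA refinement for the single-finger case is omitted.

import cv2
import numpy as np

def count_fingers(frame_bgr, lower=(0, 60, 40), upper=(35, 220, 255)):
    # Skin segmentation in the HLS colour space (thresholds are illustrative only).
    hls = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HLS)
    mask = cv2.inRange(hls, np.array(lower, np.uint8), np.array(upper, np.uint8))
    # OpenCV >= 4 return signature: (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)          # largest skin blob = hand
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 1 if cv2.contourArea(hand) > 1000 else 0
    # Each sufficiently deep convexity defect sits between two extended fingers.
    deep = sum(1 for i in range(defects.shape[0]) if defects[i, 0, 3] / 256.0 > 20)
    return deep + 1 if deep > 0 else 1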
Citations: 1
A Machine Learning based Approach to Multiclass Classification of Customer Loyalty using Deep Nets
Pub Date : 2017-03-31 DOI: 10.15866/IRECOS.V12I2.12354
Pooja Agarwal, Arti Arya, J. Suryaprasad, Abhijit Theophilus
Identifying customer loyalty is one of the most captivating areas of today’s growing business landscape. For any organization, retaining customers is more important than acquiring new ones. In this paper, a Deep Belief Network (DBN) based approach is implemented for classifying customer loyalty. Training a DBN is a tedious task, but once trained, its classification accuracy improves immensely. It also learns from its environment and does not need to be completely reprogrammed for new situations. After training, the classifier relies on its weight matrices to classify examples. The proposed approach is tested on real as well as sample datasets. The results are compared with Deep Neural Network and Support Vector Machine based approaches and show that the DBN achieves accuracy of up to 99%.
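The abstract does not reproduce the DBN architecture. As a loose stand-in, the sketch below stacks a scikit-learn BernoulliRBM (one greedily pretrained hidden layer) in front of a softmax classifier for a multiclass loyalty label; the features, number of components and dataset are invented for illustration and do not reflect the authors' network.

import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# Toy data: rows = customers, columns = behavioural features (invented),
# y = loyalty class (0 = churn-prone, 1 = neutral, 2 = loyal).
rng = np.random.default_rng(0)
X = rng.random((300, 8))
y = rng.integers(0, 3, 300)

model = Pipeline([
    ("scale", MinMaxScaler()),                      # RBMs expect inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20,
                         random_state=0)),          # unsupervised layer-wise step
    ("clf", LogisticRegression(max_iter=1000)),     # supervised classification head
])
model.fit(X, y)
print("training accuracy:", model.score(X, y))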
Citations: 0
“Forced” Force Directed Placement: a New Algorithm for Large Graph Visualization
Pub Date : 2017-03-31 DOI: 10.15866/IRECOS.V12I2.12002
Zakaria Boulouard, L. Koutti, Anass El Haddadi, B. Dousset
Graph visualization is a technique that helps users easily comprehend connected data (social networks, semantic networks, etc.) based on human perception. With the prevalence of Big Data, these graphs tend to be too large to decipher by the user’s visual abilities alone. One of the leading causes of this problem is nodes leaving the visualization space. Many attempts have been made to optimize large graph visualization, but they all have limitations. Among them, the most famous is the Force Directed Placement algorithm. It can provide beautiful visualizations for small to medium graphs, but for larger graphs it fails to keep some independent nodes, or even subgraphs, inside the visualization space. In this paper, we present an algorithm that we have named "Forced Force Directed Placement". It enhances the classical Force Directed Placement algorithm by proposing a stronger force function. The “FForce”, as we have named it, brings related nodes closer to each other before reaching an equilibrium position. This frees display space and makes it possible to visualize larger graphs.
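The abstract names a stronger attractive force ("FForce") but does not give its expression. The sketch below is a plain Fruchterman-Reingold-style loop in which the attraction exponent can be raised above the classical d^2/k; the exponent value, cooling schedule and canvas clipping are assumptions for illustration only, not the paper's formula.

import numpy as np

def layout(edges, n, steps=200, k=0.1, attract_exp=3.0, seed=0):
    """Toy force-directed layout; attract_exp > 2 mimics a pull stronger than
    the classical d**2 / k attraction (illustrative, not the paper's FForce)."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n, 2))
    for step in range(steps):
        disp = np.zeros_like(pos)
        # Repulsion between every pair of nodes: k**2 / d.
        for i in range(n):
            delta = pos[i] - pos
            dist = np.linalg.norm(delta, axis=1) + 1e-9
            dist[i] = np.inf                       # ignore self-interaction
            disp[i] += (delta / dist[:, None] * (k * k / dist)[:, None]).sum(axis=0)
        # Attraction along edges: d**attract_exp / k pulls related nodes together.
        for u, v in edges:
            delta = pos[u] - pos[v]
            d = np.linalg.norm(delta) + 1e-9
            pull = delta / d * (d ** attract_exp / k)
            disp[u] -= pull
            disp[v] += pull
        temp = 0.1 * (1 - step / steps)            # simple linear cooling
        length = np.linalg.norm(disp, axis=1) + 1e-9
        pos += disp / length[:, None] * np.minimum(length, temp)[:, None]
        pos = np.clip(pos, 0.0, 1.0)               # keep nodes inside the canvas
    return pos

print(layout([(0, 1), (1, 2), (2, 0), (3, 4)], n=5)[:2])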
Citations: 3
Grid Self-Load-Balancing: the Agent Process Paradigm
Pub Date : 2017-03-31 DOI: 10.15866/irecos.v12i2.12718
Ahmed Adnane, A. Lebbat, H. Medromi, M. Radoui
Load balancing aims to exploit networked resources equitably, in such a way that no resources are overloaded while others are under-loaded or idle. Many approaches have been proposed and implemented, but as new infrastructures such as grids and Global Computing (GC) emerge, new challenges arise with regard to network latency. The location policy, one of the main building blocks of load balancing solutions, aims to locate overloaded and under-loaded nodes in a network. To do so, multiple communication messages are sent across the network; this wastes network resources and causes considerable delays in environments like GC, which makes it impractical. In this paper, we propose a new paradigm for adaptive distributed load balancing inspired by swarm intelligence and multi-agent systems. In such a paradigm, no load balancing service is required: work tasks are self-aware and capable of self-load-balancing over a network whose load is unknown. By its nature, and thanks to its stigmergy mechanisms, the communication frequency of the proposed paradigm is significantly lower than in existing solutions. The present work explains the fundamentals of this paradigm, coined the Agent Process Paradigm (APP), as well as its underlying algorithms. Results of the performance evaluation are presented and discussed at the end of the paper.
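The abstract describes tasks that balance themselves by reading load marks left on nodes (stigmergy) instead of querying a central service. The toy simulation below only illustrates that idea; the Node and TaskAgent classes, the load marks and the sampling-based migration rule are invented for the example and are not the APP algorithms themselves.

import random

class Node:
    def __init__(self, name):
        self.name = name
        self.load_mark = 0          # stigmergic trace that other tasks can read

class TaskAgent:
    """A self-aware work unit that places itself on lightly loaded nodes."""
    def __init__(self, cost):
        self.cost = cost

    def place(self, nodes, sample_size=3):
        # Read the marks of a small random sample instead of polling every node,
        # which keeps communication frequency low (the point of the paradigm).
        candidates = random.sample(nodes, min(sample_size, len(nodes)))
        target = min(candidates, key=lambda n: n.load_mark)
        target.load_mark += self.cost   # leave a trace for the next agents
        return target

random.seed(1)
cluster = [Node(f"n{i}") for i in range(8)]
for _ in range(40):
    TaskAgent(cost=random.randint(1, 5)).place(cluster)
print({n.name: n.load_mark for n in cluster})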
Citations: 0
Software Library Investment Metrics: a New Approach, Issues and Recommendations
Pub Date : 2017-03-31 DOI: 10.15866/IRECOS.V12I2.12228
M. Shatnawi, Ismail Hmeidi, Anas Shatnawi
Software quality is considered one of the most highly interacting aspects of software engineering. It has many dimensions that vary depending on the users' requirements and points of view, and these varying dimensions make it complicated to measure and define software quality appropriately. Using libraries increases software quality more than generic programming does, because libraries are prepared and tested in advance; they also reduce the effort spent in the design, testing and maintenance processes. In this research, a new model is introduced to calculate the effort saved by using libraries instead of generic programming in testing, coding and productivity processes. The proposed model consists of three metrics: the library investment ratio, the library investment level, and program simplicity. An experimental analysis has been carried out on ten software products to compare the outcomes of the model with reuse percent. The outcomes show that the model gives better results than reuse percent, because it probes deeper into the source code than reuse percent does. The model also reflects improvements in software quality and productivity better than reuse percent.
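The abstract names three metrics (library investment ratio, library investment level, program simplicity) without their formulas. The sketch below computes one plausible reading, the share of statements that delegate to pre-tested library code versus hand-written code; the definitions are hypothetical and are not the paper's actual model.

# Hypothetical illustration only: the paper's exact metric definitions are not
# reproduced in the abstract, so these formulas are invented for the example.
def library_investment_ratio(library_calls: int, total_statements: int) -> float:
    """Fraction of statements served by pre-tested library code."""
    return library_calls / total_statements if total_statements else 0.0

def program_simplicity(total_statements: int, hand_written: int) -> float:
    """Higher when less code had to be written (and therefore tested) by hand."""
    return 1.0 - hand_written / total_statements if total_statements else 0.0

# Example product: 1200 statements, 450 of them delegate to libraries.
calls, total = 450, 1200
print("investment ratio:", round(library_investment_ratio(calls, total), 2))
print("simplicity:", round(program_simplicity(total, total - calls), 2))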
Citations: 0
Introducing Model-Driven Testing in Scrum Process Using U2TP and AndroMDA
Pub Date : 2017-01-31 DOI: 10.15866/irecos.v12i1.11334
Meryem El Allaoui, Khalid Nafil, R. Touahni
In Scrum agile software development, the increasing complexity of systems and the short sprint cycle of a product make it difficult to test products thoroughly and ensure software quality. Furthermore, manual testing is time consuming and requires expertise. As a result, automated testing has emerged as a solution to this challenge. In this paper, we present an approach to generate test cases from UML sequence diagrams, integrated with the Scrum agile process. Previously, the authors presented a technique for the automatic generation of UML 2 sequence diagrams from a set of user stories. In this paper, we propose two new cartridges for the AndroMDA framework: the first cartridge, for M2M transformation, takes UML 2 sequence diagrams as input and produces U2TP sequence diagrams; the second cartridge, for M2T transformation, takes U2TP sequence diagrams as input and generates test cases.
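To make the cartridge pipeline concrete, the sketch below shows a toy model-to-text (M2T) step that turns a U2TP-like sequence of test messages into a unittest skeleton; the in-memory model classes and the generated test body are invented for illustration and do not reflect AndroMDA's actual cartridge API.

from dataclasses import dataclass
from typing import List

@dataclass
class Message:            # one call in a (U2TP-like) test sequence diagram
    caller: str
    callee: str
    operation: str

@dataclass
class TestCaseModel:      # invented stand-in for the M2M cartridge's output
    name: str
    messages: List[Message]

def model_to_text(tc: TestCaseModel) -> str:
    """Toy M2T transformation: emit a Python unittest skeleton from the model."""
    lines = ["import unittest", "", f"class {tc.name}(unittest.TestCase):",
             "    def test_scenario(self):"]
    for m in tc.messages:
        lines.append(f"        # {m.caller} -> {m.callee}.{m.operation}()")
        lines.append(f"        self.assertIsNotNone({m.callee}.{m.operation}())")
    return "\n".join(lines)

model = TestCaseModel("LoginStoryTest", [Message("User", "auth_service", "login"),
                                         Message("User", "auth_service", "logout")])
print(model_to_text(model))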
Citations: 3
On Detecting Wi-Fi Unauthorized Access Utilizing Software Define Network (SDN) and Machine Learning Algorithms
Pub Date : 2017-01-31 DOI: 10.15866/IRECOS.V12I1.11020
M. Masoud, Yousef Jaradat, Ismael Jannoud
Software Defined Networking (SDN) has emerged as a new paradigm to tackle issues in the computer networks field. In this paradigm, the data plane and the control plane are separated, and a controller is introduced into the network to act on behalf of network middleboxes. In this work, the implications of anomaly breaches in wireless networks are investigated. The ossified authentication techniques of wireless access points are not sufficient to secure their networks. To this end, a hybrid network intrusion detection algorithm (HNID) based on user behavior in the network is proposed. This algorithm adopts two different machine learning techniques. The first utilizes an Artificial Neural Network (ANN) model with a genetic algorithm (GANN-AD) to detect anomalous behavior in the network. The second tailors unsupervised soft clustering based on an expectation-maximization (EM) model (SCAD). HNID uses the output of the second model to train the first one whenever an anomaly is detected by the second model only. The algorithm works in real time and the models can be trained on the fly. To test the proposed model, HNID has been implemented in the Ryu controller, and a testbed has been built using an OpenFlow-enabled HP 2920 switch. Our results show that the GANN-AD model detected anomalies with 88% accuracy and a negative detection rate of 5%. Moreover, SCAD detected anomalies with 80% accuracy and assigned a 45% anomaly probability to 35% of the traffic. When these algorithms are combined in HNID, the accuracy reaches 92%.
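Schematically, the hybrid scheme pairs an unsupervised EM-based soft-clustering stage with a supervised neural stage trained on the traffic the clustering flags. The sketch below mirrors that split using scikit-learn's GaussianMixture and MLPClassifier; the feature layout, the anomaly-probability threshold and the genetic-algorithm tuning step are omitted or invented here, so it is not the paper's HNID.

import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
flows = rng.normal(size=(500, 6))           # toy per-flow feature vectors
flows[-40:] += 4.0                          # inject a cluster of anomalous flows

# Stage 1 (SCAD-like): EM soft clustering; the smaller component is treated as anomalous.
gmm = GaussianMixture(n_components=2, random_state=0).fit(flows)
resp = gmm.predict_proba(flows)
anomaly_comp = int(np.argmin(np.bincount(gmm.predict(flows))))
soft_labels = (resp[:, anomaly_comp] > 0.5).astype(int)

# Stage 2 (GANN-AD-like, minus the genetic tuning): train a neural classifier
# on the labels produced by the clustering stage, then reuse it in real time.
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
ann.fit(flows, soft_labels)
print("flows flagged anomalous:", int(ann.predict(flows).sum()))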
Citations: 1
Image Modeling Based on Complex Wavelet Decomposition: Application to Image Compression and Texture Analysis
Pub Date : 2017-01-31 DOI: 10.15866/IRECOS.V12I1.10586
Riahi Wafa, J. Mbainaibeye
A natural image is defined in a high-dimensional space and is not easy to manipulate directly, so it is necessary to project it onto a space of reduced dimension. Image modeling consists in finding the best projection of the image, one that allows a good comprehension of the observed phenomenon and a good representation. Independently of the application, the modeling must give an efficient and almost complete description of the image. Wavelet-based image modeling is widely treated in the literature, and in general the real wavelet decomposition is used. However, the real wavelet decomposition is not directional enough: only three directions are considered, horizontal, vertical and diagonal. Complex wavelet decomposition allows these three directions and all other directions, depending on the phase. This paper presents our contribution to the modeling of natural images using complex wavelet decomposition and its application to image compression and texture analysis. Algorithms are developed that take into account the wavelet coefficients and their arguments, which define the phase information. In particular, an algorithm for magnitude modeling and an algorithm for phase modeling are implemented. Furthermore, a function is implemented that determines the model parameters both for wavelet coefficient modeling and for phase modeling in the context of the generalized Gaussian model. Simulations are performed on standard test images and the results are presented in terms of modeling curves and numerical parameters of the model; modeling curves are obtained both for coefficient magnitude and for phase information. The obtained results are applied to image compression and texture analysis. For image compression, one of the determined modeling parameters, the standard deviation σ, is used; simulations on standard test images show that the best image quality, depending on the application, can be obtained by adjusting the value of σ. For texture analysis, the phase information is used as a window to observe the texture; depending on the length of the angular interval, the texture may or may not be observed in this window. The main contributions of this work are, on the one hand, the modeling of the phase information and its application to texture observation and, on the other hand, the application of magnitude coefficient modeling to image compression.
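As an illustration of the generalized Gaussian modeling step, the sketch below fits scipy's gennorm distribution to wavelet detail coefficients. For simplicity it uses a real 2-D DWT from PyWavelets rather than the complex decomposition the paper relies on, so the phase-modeling part is not shown; the synthetic image and wavelet choice are arbitrary assumptions.

import numpy as np
import pywt
from scipy.stats import gennorm

# Synthetic image standing in for a standard test image.
rng = np.random.default_rng(0)
image = rng.normal(loc=128, scale=30, size=(256, 256))

# One-level real 2-D DWT (the paper uses a complex decomposition instead).
cA, (cH, cV, cD) = pywt.dwt2(image, "db2")

for name, band in (("horizontal", cH), ("vertical", cV), ("diagonal", cD)):
    coeffs = band.ravel()
    # Fit a generalized Gaussian: beta is the shape parameter, scale relates
    # to the standard deviation sigma used later for compression tuning.
    beta, loc, scale = gennorm.fit(coeffs, floc=0.0)
    sigma = coeffs.std()
    print(f"{name:10s} beta={beta:.2f} scale={scale:.2f} sigma={sigma:.2f}")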
Citations: 0
Region Merging Strategy Using Statistical Analysis for Interactive Image Segmentation on Dental Panoramic Radiographs
Pub Date : 2017-01-31 DOI: 10.15866/IRECOS.V12I1.10825
A. Arifin, R. Indraswari, N. Suciati, E. Astuti, D. A. Navastara
In low-contrast images such as dental panoramic radiographs, the optimum parameters for automatic image segmentation are not easily determined. Semi-automatic image segmentation interactively guided by the user is one alternative that can provide good segmentation results. In this paper we propose a novel region merging strategy for interactive image segmentation of dental panoramic radiographs using discriminant analysis. A new similarity measurement among regions is introduced: a region is merged with whichever cluster, object or background, yields the minimal inter-class variance. Since the representative sample regions are selected by the user, the similarity between merged regions and the corresponding samples is preserved. Experimental results show that the proposed region merging strategy gives high segmentation accuracy for both low-contrast and natural images.
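To illustrate the kind of criterion the abstract describes, the sketch below assigns each unlabeled region to the user-marked cluster (object or background) whose merge best preserves the separation of mean intensities; the Otsu-style variance formulation and the decision rule are simplified assumptions, not the paper's exact measurement.

import numpy as np

def between_class_variance(cluster_a, cluster_b):
    """Otsu-style between-class variance of two groups of region mean intensities."""
    a, b = np.asarray(cluster_a, float), np.asarray(cluster_b, float)
    w_a, w_b = len(a), len(b)
    mu_a, mu_b, mu = a.mean(), b.mean(), np.concatenate([a, b]).mean()
    return (w_a * (mu_a - mu) ** 2 + w_b * (mu_b - mu) ** 2) / (w_a + w_b)

def assign_region(region_mean, object_means, background_means):
    # Try merging the region into each user-marked cluster and keep the merge
    # that preserves class separation better (larger between-class variance).
    var_if_object = between_class_variance(object_means + [region_mean], background_means)
    var_if_backgr = between_class_variance(object_means, background_means + [region_mean])
    return "object" if var_if_object >= var_if_backgr else "background"

# User-selected samples: bright tooth regions vs. dark background regions.
teeth, backgr = [200.0, 190.0, 210.0], [40.0, 55.0, 35.0]
print(assign_region(180.0, teeth, backgr))   # expected: object
print(assign_region(60.0, teeth, backgr))    # expected: background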
Citations: 14