Title: Algorithmic Model to Limit TCP Protocol Congestion in End-To-End Networks
Pub Date: 2017-05-31
DOI: 10.15866/irecos.v12i3.13226
Andrés Felipe Hernández Leon, O. S. Parra, Miguel José Espitia Rico
An algorithmic model is presented to limit congestion in end-to-end TCP networks: current networks of this type are evaluated and their shortcomings are identified. The metrics that are most important and decisive for end-to-end TCP transmissions, such as segment size, buffer capacity, ACK times and concurrent services, are investigated, together with how they should be monitored and configured in order to obtain the best result under previously identified network conditions. The model is designed and presented as a series of steps to follow according to the actual factors that an end-to-end network presents, along with the implementation of its final design; the same methodology is used during testing and in the simulations carried out with the ns2 software. Simulations of the encountered scenarios are compared with the results of an actual end-to-end network in order to establish the main result. Finally, a series of recommendations is made, the conclusions drawn from the research are listed, and some considerations on future work are given.
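As a rough illustration of the kind of per-connection tuning the abstract describes, the Python sketch below derives a segment size and a per-flow buffer from the four monitored metrics (ACK round-trip time, link capacity, loss rate and concurrent services). It is not the authors' model: the bandwidth-delay-product rule, the thresholds and the function name are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's algorithm): choosing TCP tuning
# parameters from the four metrics the abstract highlights, using the
# common bandwidth-delay-product rule of thumb.

def recommend_tcp_settings(link_bw_bps, ack_rtt_s, loss_rate,
                           concurrent_flows, path_mtu=1500):
    """Return a (segment_size, buffer_bytes) suggestion for one flow."""
    # Bandwidth-delay product: bytes in flight needed to fill the path.
    bdp_bytes = (link_bw_bps / 8.0) * ack_rtt_s
    # Share the path capacity among the concurrent services on the host.
    per_flow_buffer = max(int(bdp_bytes / max(concurrent_flows, 1)), 4096)
    # Use the largest segment that fits the MTU; shrink it on lossy paths
    # so that each loss costs less retransmitted data.
    segment_size = path_mtu - 40          # IPv4 + TCP headers
    if loss_rate > 0.01:
        segment_size = max(segment_size // 2, 536)
    return segment_size, per_flow_buffer

print(recommend_tcp_settings(10_000_000, 0.05, 0.002, concurrent_flows=4))
```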
Title: Real Time Vision Based Method for Finger Counting Through Shape Analysis with Convex Hull and PCA Techniques
Pub Date: 2017-05-31
DOI: 10.15866/irecos.v12i3.12278
O. B. Henia
This paper presents a real-time vision-based finger counting method combining convex-hull detection and PCA techniques. The method starts by segmenting an input image to detect the area corresponding to the observed hand. For that purpose, a skin color detection method is used to differentiate the foreground containing the hand from the image background. Lighting variation can affect the accuracy of the segmentation, which in turn affects the behavior of the proposed method; to deal with this problem, the HLS color space is used to represent the colors. The hand contour is then calculated and the fingertips are detected through the convex hull and the convexity defects of the hand shape. The convex-hull algorithm is simple and gives accurate results when more than one finger is observed in the input image, but its accuracy decreases when only one finger is observed. To overcome this problem, principal component analysis (PCA) is used to analyze the hand shape and detect the single-finger case with better accuracy. The proposed method could be used in a Human-Computer Interaction (HCI) system where the machine reacts to each detected number. Both real and synthetic images are used to test and demonstrate the potential of our method.
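A minimal OpenCV sketch of the pipeline described above (HLS skin mask, convex hull and convexity defects, PCA fallback for the single-finger case). The HLS thresholds, the defect-depth cutoff and the elongation threshold are assumptions, not the authors' values.

```python
# Illustrative sketch of the described pipeline; thresholds are assumed.
import cv2
import numpy as np

def count_fingers(bgr_image):
    hls = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HLS)
    # Rough skin mask in HLS space (hue/lightness/saturation bounds assumed).
    mask = cv2.inRange(hls, (0, 40, 30), (25, 220, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)          # largest skin blob
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    fingers = 0
    if defects is not None:
        for start, end, far, depth in defects[:, 0]:
            if depth / 256.0 > 20:     # deep defect ~ gap between two fingers
                fingers += 1
    if fingers >= 1:
        return fingers + 1             # n gaps between fingers -> n+1 fingers
    # Single-finger / fist case: PCA on the contour points; a strongly
    # elongated shape is taken as one raised finger.
    pts = hand.reshape(-1, 2).astype(np.float64)
    cov = np.cov(pts, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    elongation = float(eigvals[0] / (eigvals[1] + 1e-6))
    return 1 if elongation > 4.0 else 0
```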
Title: A Machine Learning based Approach to Multiclass Classification of Customer Loyalty using Deep Nets
Pub Date: 2017-03-31
DOI: 10.15866/IRECOS.V12I2.12354
Pooja Agarwal, Arti Arya, J. Suryaprasad, Abhijit Theophilus
Identifying customer loyalty is one of the most captivating areas of today's growing business scenario. For any organization, retaining customers is more important than acquiring new ones. In this paper, a Deep Belief Network (DBN) based approach is implemented for classifying customer loyalty. Training a DBN is a tedious task, but once it is trained, classification accuracy improves immensely. It also learns from its environment and does not need to be completely reprogrammed for new situations. After training, the classifier relies on its weight matrices to classify examples. The proposed approach is tested on both real and sample datasets. The results are compared with Deep Neural Network and Support Vector Machine based approaches, showing that the DBN achieves accuracy of up to 99%.
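The abstract does not give the network architecture, and scikit-learn ships no full DBN, so the sketch below is only a rough stand-in: two stacked Bernoulli RBMs act as pre-trained feature layers and a logistic-regression output layer performs the multiclass loyalty prediction. The layer sizes, feature names and synthetic data are assumptions.

```python
# Rough DBN stand-in: stacked RBM feature layers + multiclass output layer.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.random((500, 12))            # e.g. recency, frequency, spend, ... (hypothetical)
y = rng.integers(0, 3, size=500)     # loyalty classes: low / medium / high

dbn_like = Pipeline([
    ("scale", MinMaxScaler()),       # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05,
                          n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05,
                          n_iter=20, random_state=0)),
    ("out", LogisticRegression(max_iter=1000)),
])
dbn_like.fit(X, y)
print("training accuracy:", dbn_like.score(X, y))
```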
Title: “Forced” Force Directed Placement: a New Algorithm for Large Graph Visualization
Pub Date: 2017-03-31
DOI: 10.15866/IRECOS.V12I2.12002
Zakaria Boulouard, L. Koutti, Anass El Haddadi, B. Dousset
Graph visualization is a technique that helps users easily comprehend connected data (social networks, semantic networks, etc.) based on human perception. With the prevalence of Big Data, these graphs tend to be too large to be deciphered by the user's visual abilities alone. One of the leading causes of this problem is nodes leaving the visualization space. Many attempts have been made to optimize large graph visualization, but they all have limitations. Among them, the most famous is the Force Directed Placement algorithm. It can provide beautiful visualizations for small to medium graphs, but for larger graphs it fails to keep some independent nodes, or even subgraphs, inside the visualization space. In this paper, we present an algorithm that we have named "Forced Force Directed Placement". It enhances the classical Force Directed Placement algorithm by proposing a stronger force function. The “FForce”, as we have named it, brings related nodes closer to each other before they reach an equilibrium position. This saves display space and makes it possible to visualize larger graphs.
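To make the idea concrete, here is a minimal Fruchterman-Reingold-style layout in which the attractive term is amplified by a gain factor, mimicking a "stronger force" that pulls related nodes back toward each other and keeps them inside the canvas. The paper's actual FForce formula is not reproduced; attract_gain and the cooling schedule are assumptions.

```python
# Force-directed layout sketch with an amplified attractive force.
import numpy as np

def fdp_layout(n_nodes, edges, iters=200, area=1.0, attract_gain=3.0, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.random((n_nodes, 2)) * area
    k = np.sqrt(area / n_nodes)                  # ideal pairwise distance
    for t in range(iters):
        disp = np.zeros_like(pos)
        # Repulsion between every pair of nodes.
        for i in range(n_nodes):
            delta = pos[i] - pos
            dist = np.linalg.norm(delta, axis=1) + 1e-9
            disp[i] += (delta / dist[:, None] * (k * k / dist)[:, None]).sum(axis=0)
        # Attraction along edges, amplified by attract_gain ("FForce").
        for u, v in edges:
            delta = pos[u] - pos[v]
            dist = np.linalg.norm(delta) + 1e-9
            pull = attract_gain * (dist * dist / k) * (delta / dist)
            disp[u] -= pull
            disp[v] += pull
        step = 0.1 * (1 - t / iters)             # cooling schedule
        length = np.linalg.norm(disp, axis=1, keepdims=True) + 1e-9
        pos += disp / length * np.minimum(length, step)
        np.clip(pos, 0.0, area, out=pos)         # keep nodes inside the canvas
    return pos

print(fdp_layout(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])[:3])
```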
Title: Grid Self-Load-Balancing: the Agent Process Paradigm
Pub Date: 2017-03-31
DOI: 10.15866/irecos.v12i2.12718
Ahmed Adnane, A. Lebbat, H. Medromi, M. Radoui
Load balancing aims to exploit networked resources equitably, in such a way that no resources are overloaded while others are under-loaded or idle. Many approaches have been proposed and implemented, but as new infrastructures such as grids and Global Computing (GC) emerge, new challenges arise with regard to network latency. The location policy, one of the fundamental components of load balancing solutions, aims to locate overloaded and under-loaded nodes in a network. To do so, multiple communication messages are sent across the network; this wastes network resources and causes considerable delays in environments like GC, which makes it impractical. In this paper, we propose a new paradigm for adaptive distributed load balancing inspired by swarm intelligence and multi-agent systems. In such a paradigm, no load balancing service is required: work tasks are self-aware and capable of self-load-balancing over a network of unknown load. By its nature, and because it is based on stigmergy mechanisms, the communication frequency of the proposed paradigm is significantly reduced compared to existing solutions. The present work explains the fundamentals of this paradigm, coined the Agent Process Paradigm (APP), as well as its underlying algorithms. Results of a performance evaluation are presented and discussed at the end of the paper.
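The abstract does not detail the APP algorithms, so the following toy sketch only illustrates the stigmergy idea: tasks place themselves by reading load markers left on nodes rather than querying a location policy, and the markers evaporate over time. All names and the decay rate are hypothetical.

```python
# Toy stigmergic self-load-balancing illustration (not the APP algorithm).
class Node:
    def __init__(self, name):
        self.name = name
        self.marker = 0.0            # stigmergic trace of recent load

def place_tasks(nodes, n_tasks, decay=0.9):
    assignment = []
    for t in range(n_tasks):
        target = min(nodes, key=lambda n: n.marker)   # task decides locally
        target.marker += 1.0                          # leave a trace behind
        assignment.append((t, target.name))
        for n in nodes:                               # traces evaporate
            n.marker *= decay
    return assignment

nodes = [Node(f"node{i}") for i in range(4)]
for task, node in place_tasks(nodes, 10):
    print(f"task {task} -> {node}")
```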
Title: Software Library Investment Metrics: a New Approach, Issues and Recommendations
Pub Date: 2017-03-31
DOI: 10.15866/IRECOS.V12I2.12228
M. Shatnawi, Ismail Hmeidi, Anas Shatnawi
Software quality is considered one of the most highly interacting aspects of software engineering. It has many dimensions that vary depending on the users' requirements and their points of view, and these varying dimensions make it complicated to measure and define software quality appropriately. Using libraries increases software quality more than generic programming does, as libraries are prepared and tested in advance; moreover, they reduce the effort spent on design, testing and maintenance. In this research, a new model is introduced to calculate the effort saved by using libraries instead of generic programming in the testing, coding, and productivity processes. The proposed model consists of three metrics: the library investment ratio, the library investment level, and program simplicity. An experimental analysis was carried out on ten software products to compare the outcomes of the model with reuse percent. The outcomes show that the model gives better results than reuse percent, because it examines the source code more deeply than reuse percent does. The model also has a stronger effect on improving software quality and productivity than reuse percent.
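The formulas for the three metrics are not given in the abstract; purely as an illustration of the kind of source-level measurement involved, the sketch below computes a hypothetical "library investment ratio" as the fraction of code lines that delegate work to library calls. The prefix list and the formula are assumptions, not the paper's definitions.

```python
# Hypothetical library-usage measurement, not the paper's metric definitions.

def library_investment_ratio(source_lines, library_prefixes=("np.", "pd.", "requests.")):
    """Fraction of non-blank, non-comment lines that call into a library."""
    code = [ln for ln in source_lines if ln.strip() and not ln.strip().startswith("#")]
    lib = [ln for ln in code if any(p in ln for p in library_prefixes)]
    return len(lib) / len(code) if code else 0.0

sample = [
    "import numpy as np",
    "data = np.loadtxt('values.csv')   # library call",
    "total = 0",
    "for x in data:",
    "    total += x",
    "print(np.mean(data))",
]
print(f"library investment ratio: {library_investment_ratio(sample):.2f}")
```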
Title: Introducing Model-Driven Testing in Scrum Process Using U2TP and AndroMDA
Pub Date: 2017-01-31
DOI: 10.15866/irecos.v12i1.11334
Meryem El Allaoui, Khalid Nafil, R. Touahni
In Scrum agile software development, the increasing complexity of the system and the short sprint cycle of a product make it difficult to thoroughly test products and ensure software quality. Furthermore, manual testing is time-consuming and requires expertise. As a result, automated testing has emerged as a solution to this challenge. In this paper, we present an approach to generate test cases from UML sequence diagrams, integrated with the Scrum agile process. Previously, the authors presented a technique for the automatic generation of UML 2 sequence diagrams from a set of user stories. In this paper, we propose two new cartridges for the AndroMDA framework. The first cartridge, for model-to-model (M2M) transformation, takes UML 2 sequence diagrams as input and produces U2TP sequence diagrams; the second, for model-to-text (M2T) transformation, takes U2TP sequence diagrams as input and generates test cases.
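The AndroMDA cartridges themselves are not shown in the abstract; as a conceptual stand-in for the M2T step, the sketch below turns a simplified, hypothetical U2TP-style message sequence into a test-case skeleton. The interaction format and the generated code are illustrative only.

```python
# Conceptual M2T stand-in: emit a test skeleton from a message sequence.

def generate_test_case(interaction):
    name = interaction["name"]
    lines = [f"def test_{name}():"]
    for msg in interaction["messages"]:
        lines.append(f"    # step: {msg['from']} -> {msg['to']} : {msg['call']}")
        lines.append(f"    reply = {msg['to']}.{msg['call']}")
        lines.append(f"    assert reply == expected['{msg['call']}']")
    return "\n".join(lines)

login_interaction = {                      # hypothetical U2TP-like interaction
    "name": "user_login",
    "messages": [
        {"from": "tester", "to": "auth_service", "call": "login('alice', 'pw')"},
        {"from": "tester", "to": "auth_service", "call": "logout()"},
    ],
}
print(generate_test_case(login_interaction))
```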
Title: On Detecting Wi-Fi Unauthorized Access Utilizing Software Define Network (SDN) and Machine Learning Algorithms
Pub Date: 2017-01-31
DOI: 10.15866/IRECOS.V12I1.11020
M. Masoud, Yousef Jaradat, Ismael Jannoud
Software Defined Network (SDN) has emerged as a new paradigm to tackle issues in the computer networks field. In this paradigm, the data plane and the control plane are separated, and a controller is introduced into the network to act on behalf of network middleboxes. In this work, the implications of anomaly breaches in wireless networks are investigated. The ossified authentication techniques of wireless access points are not sufficient to secure their networks. To this end, a hybrid network intrusion detection algorithm (HNID) based on user behavior in the network is proposed. This algorithm adopts two different machine learning models. The first utilizes an Artificial Neural Network (ANN) with a genetic algorithm (GANN-AD) to detect anomalous behavior in the network. The second tailors an unsupervised soft-clustering model based on expectation maximization (EM) (SCAD). HNID combines them by training the first model on the output of the second whenever an anomaly is detected by the second model only. The algorithm works in real time and the models can be trained on the fly. To test the proposed approach, HNID has been implemented in the Ryu controller, and a testbed has been built using an OpenFlow-enabled HP 2920 switch. Our results show that the GANN-AD model detected anomalies at a rate of 88% with a negative detection rate of 5%. Moreover, SCAD detected anomalies at a rate of 80% and assigns an anomaly probability of 45% to 35% of the traffic. When the two algorithms are combined in HNID, the accuracy reaches 92%.
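A simplified stand-in for the two-stage idea described above: an EM-based soft-clustering model (scikit-learn's GaussianMixture) scores traffic features, and its decisions serve as labels for a neural-network classifier. The genetic-algorithm tuning of the ANN, the real flow features and the thresholds are not reproduced; the synthetic data is an assumption.

```python
# Two-stage anomaly detection sketch: EM soft clustering feeds an ANN stand-in.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(900, 4))      # per-flow feature vectors
attack = rng.normal(4.0, 1.5, size=(100, 4))      # hypothetical anomalies
flows = np.vstack([normal, attack])

# Soft clustering via EM (stand-in for SCAD).
gmm = GaussianMixture(n_components=2, random_state=0).fit(flows)
probs = gmm.predict_proba(flows)
anomaly_cluster = int(np.argmin(np.bincount(gmm.predict(flows))))  # rarer cluster
soft_labels = (probs[:, anomaly_cluster] > 0.5).astype(int)

# Neural-network detector trained from the clustering output (no GA tuning here).
ann = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
ann.fit(flows, soft_labels)
print("agreement with soft labels:", ann.score(flows, soft_labels))
```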
Title: Image Modeling Based on Complex Wavelet Decomposition: Application to Image Compression and Texture Analysis
Pub Date: 2017-01-31
DOI: 10.15866/IRECOS.V12I1.10586
Riahi Wafa, J. Mbainaibeye
A natural image is defined in a high-dimensional space and is not easy to manipulate directly, so it is necessary to project the image onto a space of reduced dimension. Image modeling consists of finding the best projection of the image, one that allows a good understanding of the observed phenomenon and a good representation. Independently of the application, the model must give an efficient and almost complete description of the image. Wavelet-based image modeling is widely treated in the literature, and in general the real wavelet decomposition is used. However, the real wavelet decomposition is not sufficiently directional: only three directions are considered (horizontal, vertical and diagonal). Complex wavelet decomposition provides these three directions as well as all other directions, depending on the phase. This paper presents our contribution to the modeling of natural images using complex wavelet decomposition and its application to image compression and texture analysis. In this contribution, algorithms are developed that take into account the wavelet coefficients and their arguments, which define the phase information. In particular, an algorithm for magnitude modeling and an algorithm for phase modeling are implemented. Furthermore, a function is implemented that determines the model parameters for both wavelet coefficient modeling and phase modeling in the context of the generalized Gaussian model. Simulations are run on standard test images and the results are presented in terms of modeling curves and the numerical parameters of the model; the modeling curves are obtained for both the coefficient magnitudes and the phase information. The results are then applied to image compression and texture analysis. For image compression, one of the estimated modeling parameters, the standard deviation σ, is used; simulations on standard test images show that the best image quality can be obtained, depending on the application, by adjusting the value of σ. For texture analysis, the phase information is used as a window through which to observe the texture: depending on the length of the angular interval, the texture may or may not be observed in this window. The main contributions of this work are, on the one hand, the modeling of the phase information and its application to texture observation and, on the other hand, the application of the magnitude coefficient modeling to image compression.
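To illustrate the generalized-Gaussian modeling step, the sketch below fits a generalized Gaussian to the detail-subband coefficients of a wavelet decomposition and reports the standard deviation that would drive the compression stage. A real separable DWT (PyWavelets) stands in for the complex wavelet transform, and the test image is synthetic, so this is not the authors' implementation.

```python
# Generalized-Gaussian fitting of wavelet subband coefficients (illustrative).
import numpy as np
import pywt
from scipy.stats import gennorm

rng = np.random.default_rng(0)
image = rng.random((256, 256))                      # stand-in for a test image

# One level of 2-D wavelet decomposition: approximation + 3 detail subbands.
cA, (cH, cV, cD) = pywt.dwt2(image, "db4")

for name, band in (("horizontal", cH), ("vertical", cV), ("diagonal", cD)):
    coeffs = band.ravel()
    # Fit a generalized Gaussian (shape beta, location, scale) to the subband.
    beta, loc, scale = gennorm.fit(coeffs)
    sigma = coeffs.std()                            # the sigma used for compression
    print(f"{name:10s}  beta={beta:.2f}  scale={scale:.4f}  sigma={sigma:.4f}")
```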
Title: Region Merging Strategy Using Statistical Analysis for Interactive Image Segmentation on Dental Panoramic Radiographs
Pub Date: 2017-01-31
DOI: 10.15866/IRECOS.V12I1.10825
A. Arifin, R. Indraswari, N. Suciati, E. Astuti, D. A. Navastara
In low-contrast images such as dental panoramic radiographs, the optimum parameters for automatic image segmentation are not easily determined. Semi-automatic image segmentation, interactively guided by the user, is an alternative that can provide good segmentation results. In this paper we propose a novel region merging strategy for interactive image segmentation using discriminant analysis on dental panoramic radiographs. A new similarity measurement among regions is introduced; it merges regions that have minimal inter-class variance with either the object or the background cluster. Since the representative sample regions are selected by the user, the similarity between merged regions and the corresponding samples is preserved. Experimental results show that the proposed region merging strategy gives high segmentation accuracy for both low-contrast and natural images.
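One plausible reading of the merging rule, sketched below under stated assumptions: an unlabeled region is merged with whichever user-marked cluster (object or background) it has the smaller between-class variance with. The region statistics are synthetic and the paper's exact formulation may differ.

```python
# Merging decision via between-class (inter-class) variance, illustrative only.
import numpy as np

def between_class_variance(pixels_a, pixels_b):
    n_a, n_b = len(pixels_a), len(pixels_b)
    w_a, w_b = n_a / (n_a + n_b), n_b / (n_a + n_b)
    return w_a * w_b * (np.mean(pixels_a) - np.mean(pixels_b)) ** 2

def merge_region(region_pixels, object_pixels, background_pixels):
    v_obj = between_class_variance(region_pixels, object_pixels)
    v_bg = between_class_variance(region_pixels, background_pixels)
    return "object" if v_obj < v_bg else "background"

rng = np.random.default_rng(0)
object_seed = rng.normal(180, 10, 400)       # bright tooth-like sample region
background_seed = rng.normal(60, 15, 400)    # dark background sample region
unknown = rng.normal(170, 12, 150)           # region to classify
print(merge_region(unknown, object_seed, background_seed))
```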