Effective staff line detection, restoration and removal approach for different quality of scanned handwritten music sheets
Fatemeh Alirezazadeh, M. Ahmadzadeh
Pub Date : 2014-08-24  DOI: 10.14419/JACST.V3I2.3196
Musical staff detection and removal is one of the most important preprocessing steps in an Optical Music Recognition (OMR) system. This paper proposes a new method for detecting and restoring staff lines using global information from music sheets. First, the locations of the staff lines are determined and the staff is sliced. Staff-line segments are recognized in each slice, and with adequate knowledge of staff-line locations, deformed, interrupted, or partly removed staff lines can be rebuilt. A new staff-removal algorithm is also proposed, fundamentally based on removing all detected staff lines. Finally, the Fourier transform and a Gaussian low-pass filter are used to reconstruct symbols that were separated or interrupted by the removal. The method has been tested on the dataset of the musical staff removal competition held at ICDAR 2012. The experimental results show its effectiveness under various kinds of staff-line deformation.
Keywords: Fourier Transform, Gaussian Low Pass Filter, Optical Music Recognition, Run Length Coding, Staff Line Removal.
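The abstract does not reproduce the detection procedure itself, but the Run Length Coding keyword points at the standard preprocessing step behind most staff-line detectors: estimating staff-line thickness and spacing from vertical run-length statistics. A minimal sketch of that classic technique — not the authors' code; the image format and function names are my own assumptions:

```python
from collections import Counter

def column_runs(column):
    """Yield (value, length) run-length pairs for one image column."""
    run_val, run_len = column[0], 1
    for v in column[1:]:
        if v == run_val:
            run_len += 1
        else:
            yield run_val, run_len
            run_val, run_len = v, 1
    yield run_val, run_len

def estimate_staff_params(image):
    """Estimate staff-line thickness and spacing from vertical runs.

    `image` is a list of rows of 0/1 pixels (1 = black). The most common
    black run length approximates line thickness; the most common white
    run length approximates the gap between adjacent staff lines.
    """
    black, white = Counter(), Counter()
    height, width = len(image), len(image[0])
    for x in range(width):
        col = [image[y][x] for y in range(height)]
        for val, length in column_runs(col):
            (black if val == 1 else white)[length] += 1
    thickness = black.most_common(1)[0][0]
    spacing = white.most_common(1)[0][0]
    return thickness, spacing
```

On a synthetic staff with 1-pixel lines every 4 rows, this returns a thickness of 1 and a spacing of 3.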
Optimized design for controlling LED display matrix by an FPGA board
Mateur Khalid, E. Rachid, Dahou Hamad, Hlou
Pub Date : 2014-07-11  DOI: 10.14419/JACST.V3I2.2818
A key objective for the digital designer implementing a Boolean function with discrete gates is to keep the number of gates to a minimum and to save memory space without losing the original information. Simplification is therefore very important; it can be achieved by a purely algebraic process, but this becomes tedious when a very large number of variables is involved. In this paper we describe an automated solution based on a finite state machine (FSM) for simplifying and practically optimizing complex logic functions. The method is programmed and tested on a display system consisting of a light-emitting diode (LED) matrix driven by a programmable platform with a Field Programmable Gate Array (FPGA). The module is implemented on a Spartan-3E family XC3S500E FPGA board.
Keywords: Display Board, FPGA, FSM, LED, LED-Driver, Logic Simplification, Multiplex.
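The abstract does not include the FSM itself. As an illustrative software model of the general idea — a state machine that multiplexes an LED matrix one row per clock tick, as an FPGA row driver typically does — here is a hedged sketch; the row-scan behavior and all names are my assumptions, not the authors' design:

```python
def make_row_scanner(frame):
    """Return a step() function modelling a row-scan FSM for an LED matrix.

    `frame` is a list of rows of 0/1 pixels. Each call to step() returns
    (active_row, column_bits) and advances the single state variable,
    mirroring a hardware multiplexer that lights one row per clock tick.
    """
    state = {"row": 0}
    n_rows = len(frame)

    def step():
        r = state["row"]
        state["row"] = (r + 1) % n_rows   # next-state logic
        return r, frame[r]                # output logic (Moore style)

    return step
```

In hardware the same structure becomes a counter register plus a decoder; the software model is useful for checking the scan sequence before synthesis.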
A federated approach for global village services interaction
M. Ardakani, S. M. Hashemi
Pub Date : 2014-06-14  DOI: 10.14419/JACST.V3I2.2514
In the global village ecosystem, enterprises need to collaborate using Information Technology (IT) and other tools to succeed in a dynamic and heterogeneous business environment. The Global Village Services Reference Model (GVSRM) is a reference model, based on the SOSA (Service Oriented Strategies and Architectures) ontology, for realizing global village services. The model defines three architectural abstraction layers for the global village: 'infrastructure for global village services', 'global village services provisioning', and 'using global village services'. Despite its relative completeness, one of its obvious shortcomings is a lack of attention to the crucial issue of interoperability in the global village. In this model, the grid of the global village is composed of VHGs (Virtual Holding Governances). A VHG is a temporary, scalable, dynamic cluster or association of existing or newly created service-provider organizations whose objective is to satisfy the requirements of global village actors through electronic processes. In this paper, we propose a federated approach for interoperability among the VHGs of the global village.
Information visualization by dimensionality reduction: a review
Safa A. Najim
Pub Date : 2014-06-02  DOI: 10.14419/JACST.V3I2.2746
Information visualization can be considered a process of transforming the similarity relationships between data points into a geometric representation in order to reveal unseen information. High-dimensional data sets are one of the main challenges for information visualization. Dimensionality Reduction (DR) is therefore a useful strategy: it projects a high-dimensional space onto a low-dimensional space that can be visualized directly. The technique has several benefits. First, DR can minimize the required storage by reducing the size of the data sets. Second, it helps in understanding the data by discarding irrelevant features and focusing on the most important ones. DR can thus enable the discovery of rich information and assist the task of data analysis. Visualization of high-dimensional data sets is widely used in many fields, such as remote sensing imagery, biology, computer vision, and computer graphics. Visualization is a simple way to understand a high-dimensional space, because the relationships between the original data points are otherwise incomprehensible. A large number of DR methods attempt to minimize the loss of original information. This paper discusses and analyses several DR methods to support the goal of trustworthy visualization through dimensionality reduction.
Keywords: Dimensionality Reduction, Information Visualization, Information Retrieval.
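The abstract names no specific DR method. As a concrete example of the simplest linear technique usually covered in such reviews — principal component analysis, here computed by power iteration in pure Python purely for illustration, not drawn from the paper:

```python
def pca_first_component(data, n_iter=100):
    """Power iteration for the first principal component.

    `data` is a list of equal-length feature vectors. Returns (component,
    projections): the dominant eigenvector of the covariance matrix and
    each centred point's coordinate along it (its 1-D visualization).
    """
    dim = len(data[0])
    means = [sum(row[j] for row in data) / len(data) for j in range(dim)]
    centred = [[row[j] - means[j] for j in range(dim)] for row in data]
    # Covariance matrix, omitting the 1/(n-1) factor: scaling does not
    # change the eigenvectors.
    cov = [[sum(r[i] * r[j] for r in centred) for j in range(dim)]
           for i in range(dim)]
    v = [1.0] * dim
    for _ in range(n_iter):
        w = [sum(cov[i][j] * v[j] for j in range(dim)) for i in range(dim)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v, [sum(vi * xi for vi, xi in zip(v, row)) for row in centred]
```

For points lying on the diagonal of the plane, the recovered component is (up to sign) the unit diagonal direction, and the projections spread the points along one axis with no information loss — the "trustworthy" case the review is concerned with.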
RLBM model: modeling of Manet's routing protocols based on restoration links break
Zahra Abdoly, Seyyed Javad Mirabedini, Peyman Arebi
Pub Date : 2014-06-02  DOI: 10.14419/JACST.V3I2.3004
A mobile ad hoc network (MANET) is a temporary network consisting of a set of mobile nodes communicating wirelessly. Several problems arise in the interactions between the components of these networks, some of them related to routing. The main challenge for MANET routing protocols is the link-break phenomenon, which has many negative impacts on protocol performance. In this paper, we study ten MANET routing protocols that try to improve on the standard protocols by restoring broken links. By studying the behavior of these protocols, we introduce a common model shared by all of them, the Restoration Links Break model, and describe the performance of each protocol in terms of it. We further divide the protocols into two categories according to how they handle link restoration: the first category provides alternate routes before a link breaks, while the second replaces the route after the break. Finally, simulation results reveal that the first category achieves a better delivery rate, while the second category incurs less routing overhead.
Strategic business rules for business process intelligence: An Oracle prototype
R. Kaula
Pub Date : 2014-04-19  DOI: 10.14419/JACST.V3I1.2258
Business process intelligence aims to provide timely information to improve business process effectiveness and align processes with business objectives in order to compete successfully in the marketplace. Such information not only improves an organization's ability to accomplish its business objectives, but may also reveal information that facilitates competitive advantage. This paper outlines an approach to developing an information flow model in which activity dimensions are specified during business process modeling; dimensional models are then developed to identify process metrics through strategic business rules that align a business process with business objectives. The paper illustrates the concepts through a marketing business process (lead to forecast) prototype implemented in Oracle's PL/SQL language.
Modelizing a non-linear system: a computational efficient adaptive neuro-fuzzy system tool based on Matlab
G. Bosque, del Campo, J. Echanobe
Pub Date : 2014-04-16  DOI: 10.14419/JACST.V3I1.2138
In a great diversity of knowledge areas, the variables involved in the behavior of a complex system normally form a non-linear system. Finding a function that expresses that behavior requires techniques such as mathematical optimization. The new paradigms introduced by soft computing, such as fuzzy logic, neural networks, genetic algorithms, and fusions of them like neuro-fuzzy systems, offer a new way to approach this kind of problem thanks to the approximation properties of those systems (they are universal approximators). This work presents a methodology for developing a tool based on a neuro-fuzzy system of the ANFIS (Adaptive Neuro-Fuzzy Inference System) type with piecewise multilinear (PWM) behavior, obtained by imposing certain restrictions on the (triangular) membership functions chosen in the ANFIS system. The resulting tool, named the PWM-ANFIS Tool, allows modeling an n-dimensional system with one output, and also permits a comparison between the purely PWM-ANFIS model and a generic ANFIS model (with Gaussian membership functions) built with the same tool. The proposed tool handles complicated non-linear systems efficiently.
Keywords: ANFIS model, Function approximation, Matlab environment, Neuro-Fuzzy CAD tool, Neuro-Fuzzy modelling.
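The triangular-membership restriction at the heart of the PWM variant can be illustrated with a toy first-order Sugeno evaluation, the inference scheme ANFIS is built on. This is a generic ANFIS-style sketch, not the authors' Matlab tool; the function names and rule parameters are invented:

```python
def triangular(a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

def sugeno_eval(x, rules):
    """First-order Sugeno inference, as used by ANFIS.

    The output is the firing-strength-weighted average of the linear rule
    consequents p*x + q. `rules` is a list of (membership_fn, (p, q)) pairs.
    """
    weights = [mu(x) for mu, _ in rules]
    total = sum(weights)
    if total == 0.0:
        return 0.0
    return sum(w * (p * x + q)
               for w, (_, (p, q)) in zip(weights, rules)) / total
```

With triangular memberships whose neighbours overlap so that weights sum to one, the output is piecewise (multi)linear in x, which is exactly the property the PWM restriction exploits.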
Register-allocated paging for big data calculations
D. W. Thomas
Pub Date : 2014-04-09  DOI: 10.14419/JACST.V3I1.2153
Software supporting the Monte Carlo method generates large vectors of pseudo-random numbers and uses them as operands in complex mathematical expressions. When such software runs on standard PC-based hardware, the volume of data involved often exceeds the physical RAM available. To address this problem, vectors must be paged out to disk and paged back in when required. This paging is often the performance bottleneck limiting the execution speed of the software. Because the mathematical expressions are specified in advance of execution, predictive solutions are possible, for instance by treating the problem similarly to register allocation. Allocating scalar variables to processor registers is a widely studied aspect of compiler implementation: a register allocation algorithm decides which variable is held in which register, when the value in a register can be overwritten, and when a value is stored in, or later retrieved from, main memory. In this paper, register allocation techniques are used to plan the paging of vectors in Monte Carlo software. Two register allocation algorithms are applied to invented vector programs written in a prototype low-level vector language, and the results are compared.
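The abstract does not say which two register allocation algorithms were compared. Linear scan is a plausible candidate for this setting, sketched here for vector live ranges; the interval representation, the furthest-end spill heuristic, and all names are my assumptions, not the paper's method:

```python
def linear_scan(intervals, num_slots):
    """Linear-scan allocation of vectors to in-RAM slots.

    `intervals` maps vector name -> (start, end) live range in program
    order. Vectors that do not fit in `num_slots` RAM-resident slots are
    spilled, i.e. paged to disk. Returns (allocation, spilled), where
    `allocation` maps each RAM-resident vector to its slot index.
    """
    active = []                 # (end, name) of vectors currently in RAM
    allocation, spilled = {}, []
    free = list(range(num_slots))
    for name, (start, end) in sorted(intervals.items(),
                                     key=lambda kv: kv[1][0]):
        # Expire intervals that ended before this one starts.
        for e, n in list(active):
            if e < start:
                active.remove((e, n))
                free.append(allocation[n])
        if free:
            allocation[name] = free.pop()
            active.append((end, name))
        else:
            # Spill whichever live vector is needed furthest in the future.
            active.sort()
            far_end, far_name = active[-1]
            if far_end > end:
                active.pop()
                spilled.append(far_name)
                allocation[name] = allocation.pop(far_name)
                active.append((end, name))
            else:
                spilled.append(name)
    return allocation, spilled
```

With one slot and overlapping ranges, the long-lived vector is paged out in favour of the short-lived ones, which is the behaviour one wants when each page-out costs a disk write of the whole vector.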
The new candidate IPV6 address size prediction
K. Batiha
Pub Date : 2014-03-17  DOI: 10.14419/JACST.V3I1.2018
The IPv4 protocol is no longer sufficient because of its limited address space: it uses only 32 bits for addressing. IPv6 is its successor, introduced to provide a huge address space. This paper discusses the significant per-packet overhead in the IPv6 standard caused by its 128-bit addresses. We develop three studies that predict the exhaustion date for several candidate address sizes, and on that basis we suggest an alternative address size for IPv6 that would improve overall Internet performance and reduce the overhead, while still accommodating the accelerating demand for unassigned address blocks for a very long time.
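The paper's three prediction studies are not reproduced in the abstract. A simple compound-growth model shows the kind of arithmetic such a prediction involves; the growth rate and device count below are placeholder inputs for illustration, not the paper's data:

```python
import math

def years_until_exhaustion(address_bits, current_devices, growth_rate):
    """Years until 2**address_bits identifiers are exhausted, assuming
    the number of addressed devices grows by `growth_rate` per year
    (e.g. 1.0 = doubling yearly). Purely illustrative: real address
    consumption is driven by block allocation, not device counts.
    """
    capacity = 2 ** address_bits
    return math.log(capacity / current_devices) / math.log(1 + growth_rate)
```

Under even an extreme doubling-per-year assumption starting from 2**34 devices, a 64-bit space lasts three decades; this is the sort of comparison that motivates asking whether 128 bits of per-packet overhead is more than needed.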
VSS SPU-EBP: Variable step size sequential partial update error back propagation algorithm
M. Rahmaninia
Pub Date : 2014-02-28  DOI: 10.14419/JACST.V3I1.1753
In MLP networks with hundreds of thousands of weights that must be trained on millions of samples, the time and space complexity can become very large, and training the network with the error back-propagation (EBP) algorithm may be impractical. Sequential Partial Updating (SPU) is an effective method for reducing computational load and power consumption. It is especially useful for MLP networks with a large number of weights per layer, where updating every weight in every round of the EBP algorithm is costly. Although this idea reduces the computational cost and CPU time of each round, it can increase the number of epochs required for convergence, and hence the overall convergence time. To speed up the convergence rate of the SPU-EBP algorithm, we therefore propose a Variable Step Size (VSS) approach. In the VSS SPU-EBP algorithm, we use a gradient-based learning rate in SPU-EBP to accelerate the convergence of training, and we derive an upper bound for the step size of the SPU-EBP algorithm.
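The core SPU idea — updating only one block of weights per round, cycling through the blocks — can be sketched on a linear least-squares model rather than a full MLP. This is a deliberate simplification: the fixed step size below stands in for the paper's variable, gradient-based step, and all names are mine:

```python
def spu_gradient_descent(X, y, n_rounds, block_size, step):
    """Gradient descent with Sequential Partial Updating on a linear
    least-squares model: each round recomputes the residuals but updates
    only one block of weights, cycling through blocks, which cuts the
    per-round update work roughly by n_features / block_size."""
    n_features = len(X[0])
    w = [0.0] * n_features
    block_start = 0
    for _ in range(n_rounds):
        idx = [(block_start + i) % n_features for i in range(block_size)]
        # Residuals of the current model: Xw - y.
        residuals = [sum(wi * xi for wi, xi in zip(w, row)) - yi
                     for row, yi in zip(X, y)]
        # Gradient of 0.5 * sum(residual^2), restricted to the block.
        for j in idx:
            g = sum(r * row[j] for r, row in zip(residuals, X))
            w[j] -= step * g
        block_start = (block_start + block_size) % n_features
    return w
```

On a trivial identity design matrix the weights still converge to the targets; what SPU trades away is rounds (each weight is touched only every n_features/block_size rounds), which is exactly the epoch-count penalty the VSS step-size rule is meant to offset.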