A novel method of organisation of a software defined network control system
A. Kalyaev, I. Korovin, M. Khisamutdinov, G. Schaefer, Md Atiqur Rahman Ahad
Pub Date: 2015-06-15 | DOI: 10.1109/ICIEV.2015.7334062
In this paper, we propose a new method for solving control tasks in software-defined networks deployed in corporate networks. Its main feature is improved failure-free operation, achieved by employing the distributed computing resources already available in the corporate network. To make effective use of personal computers, the proposed method takes a multi-agent approach: a proactive agent controls each personal computer, and the task-solution process is dispatched in a decentralised way through agent interactions. To solve each control task, the agents of the system collaborate, facilitating both dispatch and computation.
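The decentralised dispatch idea can be sketched as a toy auction among PC agents. The paper does not describe its actual negotiation protocol, so the load-based bidding below (the `Agent` class and `dispatch` function) is an assumption made purely for illustration.

```python
# Hypothetical sketch of decentralised task dispatch among PC agents.
# The paper's protocol is not specified; a lowest-projected-load
# auction is assumed here for illustration only.

class Agent:
    def __init__(self, name, load=0.0):
        self.name = name
        self.load = load  # current utilisation of the host PC

    def bid(self, task_cost):
        # Each agent bids its projected load if it took the task.
        return self.load + task_cost

def dispatch(agents, task_cost):
    # Decentralised in spirit: every agent evaluates the task itself;
    # the lowest projected load wins and the task is assigned there.
    winner = min(agents, key=lambda a: a.bid(task_cost))
    winner.load += task_cost
    return winner.name

agents = [Agent("pc1", 0.6), Agent("pc2", 0.1), Agent("pc3", 0.3)]
assert dispatch(agents, 0.2) == "pc2"   # pc2 had the lightest load
```

In a real deployment the bid would also reflect failure history and spare capacity, which is what lets idle corporate PCs absorb control tasks when other nodes fail.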
Visual saliency of character feature in an image
Taira Nagashima, H. Takano, K. Nakamura
Pub Date: 2015-06-15 | DOI: 10.1109/ICIEV.2015.7334060
The visual saliency map has been proposed as a computational model for estimating human visual attention, and is used to estimate bottom-up visual attention for still and moving images. However, under conditions that also involve top-down visual attention, e.g. advertisements, the accuracy of visual attention estimated from the saliency map decreases. Character features are considered one of the factors inducing this deterioration. In this study, we hypothesised that character features are salient enough to attract visual attention, and performed two types of experiments to test this hypothesis. Still images containing both characters (Japanese hiragana or Thai strings) and simple symbols were used as visual stimuli, presented to the subjects for a short period (2 s) to exclude the effect of top-down attention. As a result, the fixation probability of the character regions (both hiragana and Thai) in the image was higher than that of the symbol regions. Paired t-tests showed a significant difference in fixation ratio between hiragana and symbols (p < 0.001) and, likewise, between Thai characters and symbols (p < 0.001). Thus, the present results indicate the visual saliency of characters.
Automatic CT image segmentation of the lungs with an iterative Chan-Vese algorithm
Shuqiang Guo, Liqun Wang
Pub Date: 2015-06-15 | DOI: 10.1109/ICIEV.2015.7334070
Lung segmentation is an important task for quantitative lung CT image analysis and computer-aided diagnosis. However, accurate, automated lung CT segmentation can be made difficult by the presence of abnormalities. Since many lung diseases change tissue density, producing intensity changes in the CT data, intensity-only segmentation algorithms fail for most pathological lung cases. In this paper, a modified Chan-Vese algorithm is proposed for image segmentation, based on the similarity between each point and the centre point of its neighbourhood. The algorithm captures the details of local regions, enabling segmentation in areas of grey-level heterogeneity. Experimental results show that the method segments lung CT images with higher accuracy, better adaptability and more stable performance than the traditional Chan-Vese model.
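For context, the classical global Chan-Vese update that the authors modify can be sketched in a few lines: each pixel joins the region whose mean intensity it matches more closely. Curvature regularisation is omitted, the image is synthetic, and the paper's neighbourhood-similarity modification is not reproduced here.

```python
# Toy sketch of the classical (global) Chan-Vese region update.
# Curvature regularisation is omitted and the image is synthetic;
# the paper's local-similarity modification is not shown.
import numpy as np

def chan_vese_step(image, mask):
    # Region means of the current foreground/background partition.
    c1 = image[mask].mean()
    c2 = image[~mask].mean()
    # A pixel joins the region whose mean it matches more closely.
    return (image - c1) ** 2 < (image - c2) ** 2

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0             # bright "lung-like" blob
mask = np.zeros_like(img, bool)
mask[4:28, 4:28] = True           # rough initial contour
for _ in range(5):
    mask = chan_vese_step(img, mask)
assert mask.sum() == 16 * 16      # converges to the bright square
```

The global model fails precisely when `c1` and `c2` stop being representative, i.e. in grey-level heterogeneous regions, which is what motivates replacing the global means with neighbourhood-centre similarity.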
Cross-layer mobility-aware MAC protocol for cognitive radio sensor network
M. Zareei, A. M. Muzahidul Islam, N. Mansoor, S. Baharun, E. M. Mohamed, S. Sampei
Pub Date: 2015-06-15 | DOI: 10.1109/ICIEV.2015.7334039
This paper proposes a novel cross-layer, mobility-aware MAC protocol for cluster-based cognitive radio sensor networks, with a primary focus on cluster formation and maintenance. The proposed clustering mechanism divides the network into clusters based on three values: spectrum availability, node power level and current node speed. Clusters therefore form with high stability and flexibility, avoiding frequent re-clustering. Moreover, the method integrates spectrum sensing at the physical (PHY) layer with packet scheduling at the MAC layer, making it more robust to primary-user (PU) activity as well as node mobility. Simulation results show that the proposed protocol guarantees a good number of common channels per cluster and outperforms conventional protocols in throughput, power consumption and packet transmission delay.
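One way the three clustering values could be combined is a weighted score favouring high spectrum availability, high residual power and low mobility. The `cluster_score` function, its weights and its normalisation are assumptions for illustration; the paper's exact formula is not given in the abstract.

```python
# Hypothetical scoring of nodes on the three values the protocol
# uses (spectrum availability, power level, node speed). Weights,
# normalisation and the function itself are assumed, not the paper's.

def cluster_score(free_channels, power, speed,
                  w_ch=0.5, w_pw=0.3, w_sp=0.2, max_speed=10.0):
    # More free channels and more power raise the score; higher
    # mobility lowers it, since stable clusters avoid re-clustering.
    return (w_ch * free_channels / 10.0
            + w_pw * power
            + w_sp * (1.0 - speed / max_speed))

nodes = {
    "a": cluster_score(free_channels=8, power=0.9, speed=1.0),
    "b": cluster_score(free_channels=3, power=0.4, speed=7.0),
}
head = max(nodes, key=nodes.get)
assert head == "a"   # the stable, well-powered node leads the cluster
```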
A study on partition quality of Fuzzy Co-clustering with exclusive item memberships
Katsuhiro Honda, Takaya Nakano, S. Ubukata, A. Notsu
Pub Date: 2015-06-15 | DOI: 10.1109/ICIEV.2015.7334058
Bag-of-Words data analysis is a fundamental issue in web data mining for Big Data utilisation, and co-clustering is often applied to co-occurrence analysis in such document-keyword association problems. In probabilistic partition models such as multinomial mixtures and fuzzy c-means-type models, different partition constraints are imposed on rows (objects) and columns (items), so item memberships may not be useful for revealing item partitions. One approach to improving the interpretability of item partitions is an additional penalty enforcing exclusive item memberships, which has been shown to emphasise cluster-wise representative items in document analysis. In this paper, the utility of this penalisation approach is further studied by comparing partition quality on several benchmark data sets. The experimental results show that the additional penalty can sometimes slightly improve partition quality, in addition to improving the interpretability of co-cluster partitions.
A study on keyword detection using weighted similarity and character sequence for low-resolution medical documents
Makoto Kawamura, H. Kawanaka, Shunsuke Doi, Takahiro Suzuki, H. Takase, S. Tsuruoka
Pub Date: 2015-06-15 | DOI: 10.1109/ICIEV.2015.7334027
With the diffusion of Hospital Information Systems, many medical documents have been computerised. In addition, most paper documents predating computerisation have also been scanned and archived as document images. These are usually converted to text by document analysis techniques and Optical Character Readers (OCR), then archived for medical document retrieval. However, the resolution of some documents is insufficient for character recognition because of storage constraints, scanning regulations and so on. Desired keywords therefore cannot be found in these documents, and as a result they are still not used effectively in medical document retrieval systems. In this study, we discuss keyword detection and extraction methods for such document images. As a first step, this paper proposes a method to detect and extract desired words using weighted dissimilarity and character sequence. Evaluation experiments on actual medical documents are conducted to assess the effectiveness of the proposed method.
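Weighted dissimilarity between a keyword and an OCR output can be sketched as an edit distance whose substitution cost is reduced for visually confusable characters, since low-resolution OCR tends to confuse them. The `CONFUSABLE` table and the costs below are made-up examples, not the paper's values.

```python
# Sketch of keyword matching by weighted dissimilarity: a standard
# edit distance with cheap substitutions for visually similar
# characters. The confusion table and costs are illustrative only.

CONFUSABLE = {("O", "0"), ("0", "O"), ("l", "1"), ("1", "l")}

def weighted_edit_distance(a, b):
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = float(i)
    for j in range(n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                sub = 0.0
            elif (a[i - 1], b[j - 1]) in CONFUSABLE:
                sub = 0.2   # cheap swap for OCR-confusable pairs
            else:
                sub = 1.0
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + sub)   # substitution
    return d[m][n]

# "O" misread as "0" is barely penalised, so the keyword still matches.
assert weighted_edit_distance("OCR", "0CR") == 0.2
assert weighted_edit_distance("OCR", "XCR") == 1.0
```

A retrieval system would accept a candidate whenever this dissimilarity falls below a threshold, letting keywords survive the character errors typical of low-resolution scans.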
Extraction of mechanoluminescent pattern based on afterglow images
N. Ueno, Kouki Iwasaki, Chao Xu, Y. Fujio
Pub Date: 2015-06-15 | DOI: 10.1109/ICIEV.2015.7334036
A novel technique has been developed to observe stress distributions with a mechanoluminescent (ML) sensor. ML materials convert mechanical action directly into light intensity. Dynamic stress distributions on the surfaces of various structures are visualised as light-intensity patterns by an ML paint sensor composed of ML micro-particles and a binder. The technique has been applied to the evaluation of artificial hard tissue such as a synthetic femur. Notably, the ML phenomenon is accompanied by an undesirable afterglow whose intensity decreases over time. In this study, a novel method is proposed for extracting ML patterns based on afterglow images, under the assumption that the decay function of the afterglow intensity is uniform. Averaging the afterglow images provides a base afterglow pattern, and polynomial approximation of the dot products between observed images and the base pattern yields the component values of the afterglow pattern. By subtracting the computed afterglow pattern from the images observed while a load is applied, the ML patterns are successfully extracted.
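The pipeline (average base pattern, dot-product amplitude, subtraction) can be sketched in a simplified single-frame form. The paper's polynomial approximation over time is reduced here to one projection, and all images are synthetic stand-ins.

```python
# Simplified sketch of afterglow removal: average pure afterglow
# frames into a base pattern, estimate an observed frame's afterglow
# amplitude by projection onto that base, then subtract. Synthetic
# data only; the paper fits the amplitudes with a polynomial in time.
import numpy as np

rng = np.random.default_rng(0)
base_shape = rng.random((16, 16))   # spatial shape of the afterglow
afterglow_frames = [0.9 * base_shape, 0.8 * base_shape, 0.7 * base_shape]

# Base pattern: average of afterglow-only frames, normalised.
base = np.mean(afterglow_frames, axis=0)
base /= np.linalg.norm(base)

ml_pattern = np.zeros((16, 16))
ml_pattern[4:8, 4:8] = 0.5                  # true stress-induced light
observed = 0.6 * base_shape + ml_pattern    # frame captured under load

# Afterglow amplitude via dot product (projection onto the base).
amp = np.vdot(observed, base)
extracted = observed - amp * base           # ML pattern, approximately
```

The subtraction also removes the small component of the ML pattern that happens to align with the base, so the recovery is approximate; fitting the amplitude decay over several frames, as the paper does, reduces that bias.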
Making the cloud energy efficient an approach to make the data centers greener
M. Aion, M. N. Bhuiyan, Akib Jabed
Pub Date: 2015-06-15 | DOI: 10.1109/ICIEV.2015.7334045
Today's fast-growing IT industry has created opportunities for everyone, but one of the pressing questions of the modern world is the growth of energy demand and prices. The environmental impact and the depletion of fossil fuels have, moreover, brought a crisis in energy-related issues. Today's information technology revolves around data centres, and hence cloud computing. Data centres around the world require a great amount of energy every day, which affects both energy supply and environmental conditions, so the continuity of energy supply in the future is uncertain. This paper presents a study of the energy consumption of data centres, how it can be minimised, and how to prepare for the quest for global energy saving and greener ICT.
Scene recognition based on gradient feature for autonomous mobile robot and its FPGA implementation
Tsukasa Nakamura, Yasufumi Touma, H. Hagiwara, K. Asami, M. Komori
Pub Date: 2015-06-15 | DOI: 10.1109/ICIEV.2015.7334016
This paper introduces an image processing system for scene recognition using gradient features and its FPGA implementation for mobile robots. We propose a hierarchical gradient feature descriptor that can be implemented in a compact logic circuit on the FPGA. The gradient feature includes a corner detection function based on the dispersion of the directional gradient. The proposed hierarchical gradient feature analyses magnitude and direction in 17 regional blocks, where, as preprocessing, the input image is smoothed by a Gaussian filter implemented with 7 line buffers and 8 parallel circuits.
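The per-block statistics the descriptor builds on can be sketched in software: horizontal and vertical differences give gradient magnitude and direction, and the dispersion of direction within a block serves as the corner cue. Block size and the dispersion measure below are illustrative assumptions, not the paper's circuit.

```python
# Software sketch of per-block gradient statistics: magnitude,
# direction, and directional dispersion as a corner indicator.
# Block size and the dispersion measure are illustrative only.
import numpy as np

def block_gradient_stats(block):
    gy, gx = np.gradient(block.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)
    # High dispersion of direction among significant gradients
    # suggests a corner rather than a straight edge.
    disp = np.var(ang[mag > 0]) if (mag > 0).any() else 0.0
    return mag.mean(), disp

edge = np.tile(np.r_[np.zeros(4), np.ones(4)], (8, 1))  # vertical edge
corner = np.zeros((8, 8))
corner[4:, 4:] = 1.0                                    # L-shaped corner
_, disp_edge = block_gradient_stats(edge)
_, disp_corner = block_gradient_stats(corner)
assert disp_corner > disp_edge   # corners mix gradient directions
```

On the FPGA the same quantities come from line buffers and parallel difference circuits rather than floating-point arrays, but the decision logic is analogous.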
Training LSSVM with GWO for price forecasting
Z. Mustaffa, M. Sulaiman, M. Kahar
Pub Date: 2015-06-15 | DOI: 10.1109/ICIEV.2015.7334054
This paper presents a hybrid forecasting model, the Grey Wolf Optimizer-Least Squares Support Vector Machines (GWO-LSSVM). Particular attention is paid to determining the LSSVM hyperparameters, for which the GWO is utilised as an optimisation tool. Applied to gold price forecasting, the feasibility of GWO-LSSVM is measured by Mean Absolute Percentage Error (MAPE) and Root Mean Square Percentage Error (RMSPE). The simulations show that, in a comparison against two hybrid methods, GWO-LSSVM produces lower forecasting error than the identified forecasting techniques.
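The two error measures are standard and can be written directly; the sample series below is illustrative, not the paper's gold-price data.

```python
# Minimal implementations of the two error measures used to assess
# the forecasts. The sample numbers are illustrative only.
import math

def mape(actual, forecast):
    # Mean Absolute Percentage Error, in percent.
    return 100.0 / len(actual) * sum(
        abs((a - f) / a) for a, f in zip(actual, forecast))

def rmspe(actual, forecast):
    # Root Mean Square Percentage Error, in percent.
    return 100.0 * math.sqrt(sum(
        ((a - f) / a) ** 2 for a, f in zip(actual, forecast)) / len(actual))

actual = [100.0, 200.0, 400.0]
forecast = [90.0, 210.0, 420.0]
assert abs(mape(actual, forecast) - 6.6667) < 1e-3
assert rmspe(actual, forecast) >= mape(actual, forecast)  # RMS >= mean
```

RMSPE penalises large relative errors more heavily than MAPE, which is why the two are often reported together for price series.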