2012 12th International Conference on Intelligent Systems Design and Applications (ISDA)

Binary and multiclass imbalanced classification using multi-objective ant programming
Pub Date: 2012-11-01 | DOI: 10.1109/ISDA.2012.6416515
J. L. Olmo, Alberto Cano, J. Romero, Sebastián Ventura
Classification in imbalanced domains is a challenging task, since most real-world applications present skewed data distributions. Nevertheless, several issues in this kind of problem remain open. This paper presents a multi-objective grammar-based ant programming algorithm for imbalanced classification that, unlike most solutions presented so far, addresses both the binary and the multiclass case. We carry out two experimental studies comparing our algorithm against binary and multiclass solutions, demonstrating that it achieves excellent performance on both binary and multiclass imbalanced data sets.
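The ant programming algorithm itself is too involved to reproduce from the abstract, but the evaluation problem it targets is easy to illustrate: on skewed data, plain accuracy is misleading, which is why imbalanced-classification work reports class-sensitive metrics. A minimal sketch of one standard such metric, the geometric mean of per-class recalls (chosen for illustration; the abstract does not name its metrics):

```python
import numpy as np

def geometric_mean_recall(y_true, y_pred):
    """Geometric mean of per-class recalls: a common summary metric for
    imbalanced classification, since accuracy alone hides minority-class errors."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    recalls = []
    for cls in np.unique(y_true):
        mask = y_true == cls
        recalls.append(np.mean(y_pred[mask] == cls))  # recall for this class
    return float(np.prod(recalls) ** (1.0 / len(recalls)))

# A 90/10 imbalanced toy set: a classifier that ignores the minority class
# scores 90% accuracy but a geometric mean of 0.
y_true = [0] * 9 + [1]
y_pred = [0] * 10
print(geometric_mean_recall(y_true, y_pred))  # 0.0
```

A single missed class drives the metric to zero, which is exactly the behavior that makes it suitable for both binary and multiclass imbalanced settings.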
A biology-based template-matching framework
Pub Date: 2012-11-01 | DOI: 10.1109/ISDA.2012.6416559
Hélio Perroni Filho, A. D. Souza
The multichannel model is a complete reassessment of how neurons work at the biochemical level. Its results can be extended into an overarching theory of how vision, memory, and cognition arise in the living brain. This article documents a first attempt at testing the model's validity by applying its principles to the construction of an image template-matching framework, which is then used to solve a Graphical User Interface (GUI) automation problem. The template-matching application thus implemented can consistently locate the required visual controls, even when template and match differ in color palette or (to a lesser degree) feature proportions. The article concludes by discussing the significance of these results and directions for further research.
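The multichannel-model framework cannot be reconstructed from the abstract alone; as a point of reference for the GUI-automation task it solves, a conventional template matcher uses normalized cross-correlation (NCC), which also tolerates global intensity changes of the kind the article mentions. A self-contained sketch, not the authors' method:

```python
import numpy as np

def match_template_ncc(image, template):
    """Exhaustive normalized cross-correlation: return the (row, col) offset
    where `template` best matches `image`, plus the NCC score in [-1, 1].
    A conventional baseline, not the multichannel-model framework."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = tn * np.sqrt((wz ** 2).sum())
            if denom == 0:          # flat window: undefined correlation, skip
                continue
            score = (wz * t).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

# Cut the template out of a noise image at (3, 4) and recover its position.
rng = np.random.default_rng(0)
img = rng.random((16, 16))
tpl = img[3:8, 4:9].copy()
pos, score = match_template_ncc(img, tpl)
print(pos)  # (3, 4)
```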
Congestion control by dynamic sharing of bandwidth among vehicles in VANET
Pub Date: 2012-11-01 | DOI: 10.1109/ISDA.2012.6416553
Trishita Ghosh, S. Mitra
The wireless access in vehicular environments system was developed to enhance the driving safety and comfort of automotive users. However, such systems suffer from quality-of-service degradation for safety applications, caused by channel congestion in scenarios with high vehicle density. The present work proposes a congestion control mechanism for vehicular ad hoc networks. It supports the communication of safe and unsafe messages among vehicles and infrastructure. Each node maintains a control queue to store safe messages and a service queue to store unsafe messages. The control channel is used for the transmission of safe messages, and the service channel for the transmission of unsafe messages. Each node computes its own priority depending on the number of messages waiting in its control and service queues, and dynamically reserves a fraction of the control and service channels accordingly. Unsafe messages at a node may also be transmitted over the control channel, provided the control channel is free and the service channel is overloaded; this reduces the loss of unsafe messages at the node, which in turn lowers its congestion level and improves its quality of service. The available bandwidth is likewise distributed among the nodes dynamically according to their priority. The performance of the proposed scheme is evaluated in terms of the average loss of unsafe messages, the average delay of safe and unsafe messages, and the storage overhead per node.
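The abstract describes priorities computed from queue lengths and bandwidth shared among nodes in proportion to priority, but publishes no formulas. A toy sketch of that proportional-sharing idea; the linear priority function and the weighting of safe over unsafe messages are assumptions for illustration:

```python
def node_priority(control_queue_len, service_queue_len, w_safe=2.0, w_unsafe=1.0):
    """Priority from the number of waiting messages; safe (control-queue)
    messages are weighted higher. The weights are illustrative assumptions,
    not values from the paper."""
    return w_safe * control_queue_len + w_unsafe * service_queue_len

def share_bandwidth(total_bw, queue_lengths):
    """Distribute the available bandwidth among nodes in proportion to
    their priorities (equal split when all queues are empty)."""
    prios = [node_priority(c, s) for c, s in queue_lengths]
    total = sum(prios)
    if total == 0:
        return [total_bw / len(queue_lengths)] * len(queue_lengths)
    return [total_bw * p / total for p in prios]

# Three nodes, given as (control-queue, service-queue) lengths.
shares = share_bandwidth(10.0, [(4, 2), (1, 1), (0, 0)])
print(shares)  # the node with the longest queues gets the largest share
```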
Evaluating the effects of K-means clustering approach on medical images
Pub Date: 2012-11-01 | DOI: 10.1109/ISDA.2012.6416581
Hossam M. Moftah, Walaa H. Elmasry, Nashwa El-Bendary, A. Hassanien, K. Nakamatsu
Image segmentation is an essential step in most medical image analysis tasks, since good segmentation results benefit both physicians and patients by providing important information for surgical planning and early disease detection. This paper evaluates the performance of the K-means clustering algorithm for this purpose. To achieve this, we applied the K-means approach to different medical images, including liver CT and breast MRI images. Experimental results show that the overall segmentation accuracy offered by the K-means approach is high compared with that of the well-known normalized cuts segmentation approach.
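The evaluated approach can be sketched in a few lines: cluster pixel intensities with K-means and read the cluster labels back as a segmentation. A minimal intensity-only illustration, not the authors' exact pipeline:

```python
import numpy as np

def kmeans_segment(image, k=2, iters=20):
    """Segment a grayscale image by K-means clustering of pixel intensities.
    Centers are initialized evenly across the intensity range to keep the
    sketch deterministic. Returns a label map of shape `image.shape`."""
    pixels = image.reshape(-1, 1).astype(float)
    centers = np.linspace(pixels.min(), pixels.max(), k)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute the centers.
        labels = np.argmin(np.abs(pixels - centers), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return labels.reshape(image.shape)

# Synthetic "scan": dark background with one bright square region.
img = np.zeros((32, 32))
img[8:16, 8:16] = 200.0
seg = kmeans_segment(img, k=2)
# The bright square falls into one cluster, the background into the other.
print(np.unique(seg))
```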
A hybrid intelligent system for recovery and performance evaluation after
Pub Date: 2012-11-01 | DOI: 10.1109/ISDA.2012.6416673
S. M. N. Arosha Senanayake, O. A. Malik, M. Petra, Dansih Zaheer
This paper presents a hybrid intelligent system for the recovery and performance evaluation of athletes after anterior cruciate ligament (ACL) injury/reconstruction. Fuzzy logic and case-based reasoning are combined to build an assistive tool for sports trainers, coaches, and clinicians for maintaining athletes' profiles, monitoring recovery progress, classifying recovery status, and adjusting individual recovery protocols. Kinematics and neuromuscular data are collected from subjects after ACL injury/reconstruction using self-adjusting body-mounted wireless sensors. After feature extraction and transformation using principal component analysis, fuzzy clustering with automatic detection of the number of clusters is employed to group the data according to current recovery status. A knowledge base stores subjects' profiles, recovery session data, and problem/solution pairs. Recovery classification and the selection of similar cases are performed using a fuzzy k-nearest neighbor (f-kNN) classifier and the cosine similarity measure. Once relevant cases are selected, adaptation is performed and the performance evaluation is carried out. The proposed system has been tested on a group of healthy and post-operative athletes; its classification accuracy for walking/running activity exceeds 94% under leave-one-out cross-validation.
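The classification step named in the abstract, fuzzy k-NN with cosine similarity, can be sketched compactly. This follows the standard Keller-style fuzzy weighting; the paper's feature pipeline (PCA on kinematics/EMG data) and exact parameters are omitted, and the toy feature vectors below are invented for illustration:

```python
import numpy as np

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def fuzzy_knn(X_train, y_train, x, k=3, m=2.0):
    """Fuzzy k-nearest-neighbor membership using cosine similarity:
    the k most similar stored cases vote, weighted by inverse distance
    raised to 2/(m-1). Returns a {class: membership} dict summing to 1."""
    sims = np.array([cosine_sim(x, xi) for xi in X_train])
    nn = np.argsort(-sims)[:k]              # k most similar cases
    dists = 1.0 - sims[nn] + 1e-9           # cosine distance, kept positive
    weights = dists ** (-2.0 / (m - 1.0))   # fuzzy distance weighting
    memberships = {}
    for cls in set(y_train):
        mask = np.array([y_train[i] == cls for i in nn])
        memberships[cls] = float((weights * mask).sum() / weights.sum())
    return memberships

# Hypothetical 2-D recovery features with two status classes.
X = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]])
y = ["recovered", "recovered", "early", "early"]
mem = fuzzy_knn(X, y, np.array([1.0, 0.0]), k=3)
print(max(mem, key=mem.get))  # recovered
```

The graded memberships, rather than a hard label, are what make the classifier useful for tracking gradual recovery progress.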
LoG-DEM: Log Gabor filter and Dominant Eigen Map approaches for moving object detection and
Pub Date: 2012-11-01 | DOI: 10.1109/ISDA.2012.6416600
Gopalakrishna M T, M. Ravishankar, D. Babu
In recent years, the number of visual surveillance systems has greatly increased, and these systems have developed into intelligent systems that automatically detect, track, and recognize objects in video. Automatic moving object detection and tracking is a very challenging task in video surveillance applications, and many methods based on edge, color, and texture information have been proposed for it. Due to the unpredictable characteristics of objects in foggy videos, object detection remains a challenging problem. In this paper, we propose a novel scheme for moving object detection based on the Log Gabor filter (LGF) and Dominant Eigen Map (DEM) approaches. The location of the moving object is obtained by performing connected component analysis, and the object is then tracked based on its centroid. A number of experiments were performed using indoor and outdoor video sequences. The proposed method was tested on standard PETS datasets and many real-time video sequences; the results obtained are satisfactory and are compared with well-known traditional methods.
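The log-Gabor filter at the heart of the scheme is a standard frequency-domain construction: a Gaussian on a logarithmic frequency axis, with no DC component. A generic radial-only sketch (the paper's orientations and parameter values are not given, so f0 and the bandwidth ratio below are assumptions):

```python
import numpy as np

def log_gabor_radial(shape, f0=0.1, sigma_ratio=0.65):
    """Radial part of a 2-D log-Gabor filter in the frequency domain:
    G(f) = exp(-(log(f/f0))^2 / (2*(log(sigma_ratio))^2)).
    f0 and sigma_ratio are illustrative values, not the paper's."""
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    f = np.sqrt(fx ** 2 + fy ** 2)
    f[0, 0] = 1.0                    # avoid log(0); DC is zeroed below
    g = np.exp(-(np.log(f / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    g[0, 0] = 0.0                    # a log-Gabor filter has no DC component
    return g

def log_gabor_response(image, **kw):
    """Filter an image by multiplying its spectrum with the log-Gabor filter."""
    G = log_gabor_radial(image.shape, **kw)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))

img = np.random.default_rng(1).random((64, 64))
resp = log_gabor_response(img)
print(resp.shape)  # (64, 64)
```

Because the filter suppresses both the DC term and high frequencies, its response emphasizes mid-frequency structure, which is what makes it usable under the low-contrast conditions of foggy video.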
Stereo matching with VG-RAM Weightless Neural Networks
Pub Date: 2012-11-01 | DOI: 10.1109/ISDA.2012.6416556
L. Veronese, Lauro Jose Lyrio Junior, Filipe Wall Mutz, Jorcy de Oliveira Neto, Vitor Barbirato
Virtual Generalizing Random Access Memory Weightless Neural Networks (VG-RAM WNN) are an effective machine learning technique that offers simple implementation and fast training and testing. We examined the performance of VG-RAM WNN on binocular dense stereo matching using the Middlebury Stereo Datasets. Our experimental results show that, even without tackling occlusions and discontinuities in the stereo image pairs examined, our VG-RAM WNN architecture for stereo matching ranked 114th in the Middlebury Stereo Evaluation system. This result is promising, because the difference in performance among approaches ranked in distinct positions is very small.
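The basic VG-RAM mechanism behind the simplicity claim is table lookup: training stores binary input-output pairs, and testing returns the output of the stored input nearest in Hamming distance. A single-neuron sketch of that mechanism only, not the paper's stereo-matching architecture:

```python
import numpy as np

class VGRAMNeuron:
    """Minimal VG-RAM weightless neuron: `train` memorizes binary
    input-output pairs; `test` returns the output of the stored input
    with the smallest Hamming distance to the query."""
    def __init__(self):
        self.inputs, self.outputs = [], []

    def train(self, x, y):
        self.inputs.append(np.asarray(x, dtype=np.uint8))
        self.outputs.append(y)

    def test(self, x):
        x = np.asarray(x, dtype=np.uint8)
        dists = [int(np.sum(xi != x)) for xi in self.inputs]
        return self.outputs[int(np.argmin(dists))]

n = VGRAMNeuron()
n.train([1, 0, 1, 1], "left")
n.train([0, 1, 0, 0], "right")
print(n.test([1, 0, 1, 0]))  # left (Hamming distance 1 vs 3)
```

Training is a single pass of memorization and testing is a nearest-pattern lookup, which is why the technique trains and tests fast.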
DWT-DCT-SVD based blind watermarking technique of gray image in electrooculogram signal
Pub Date: 2012-11-01 | DOI: 10.1109/ISDA.2012.6416619
N. Dey, D. Biswas, A. B. Roy, Achintya Das, S. S. Chaudhuri
At present, most hospitals and diagnostic centers globally use wireless media to exchange biomedical information for the mutual availability of therapeutic case studies. The required level of security and authenticity for transmitting biomedical information over the internet is quite high. Security can be increased, the authenticity of the information verified, and control over the copying process ascertained by adding a watermark as “ownership” information in multimedia content. In the proposed method, different types of gray-scale biomedical images can be used as the added ownership (watermark) data. Electrooculography (EOG) is a medical test used by ophthalmologists to monitor eyeball movement during rapid eye movement (REM) and non-REM sleep, to detect disorders of the human eye, and to measure the resting potential of the eye. In the present work, the 1-D EOG signal is transformed into a 2-D signal, and DWT, DCT, and SVD are applied to the transformed 2-D signal to embed a watermark in it. The watermark image is extracted by applying the inverse DWT, inverse DCT, and SVD. The peak signal-to-noise ratio (PSNR) of the original EOG signal versus the watermarked signal, and the correlation between the original and extracted watermark images, are calculated to demonstrate the efficacy of the proposed method.
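The SVD stage is the core of such embedding schemes: the watermark perturbs the host block's singular values, which are robust to mild signal distortion. A sketch of that stage alone; the DWT and DCT transform stages applied to the reshaped EOG signal are elided here, the embedding strength alpha is an assumed value, and the random arrays stand in for real data:

```python
import numpy as np

def embed_svd_watermark(host, wm, alpha=0.05):
    """Additive SVD embedding on a 2-D host block: the host's singular
    values are shifted by alpha times the watermark's singular values.
    Returns the watermarked block and the side information ("keys")
    needed for extraction."""
    Uh, Sh, Vh = np.linalg.svd(host)
    Uw, Sw, Vw = np.linalg.svd(wm)
    S_marked = Sh + alpha * Sw          # embed in the singular-value spectrum
    marked = (Uh * S_marked) @ Vh       # rebuild the host with the marked spectrum
    return marked, (Sh, Uw, Vw)

def extract_svd_watermark(marked, keys, alpha=0.05):
    """Inverse step: recover the watermark's singular values from the
    marked block's spectrum, then rebuild the watermark image."""
    Sh, Uw, Vw = keys
    _, S_marked, _ = np.linalg.svd(marked)
    Sw_rec = (S_marked - Sh) / alpha
    return (Uw * Sw_rec) @ Vw

rng = np.random.default_rng(2)
host = rng.random((8, 8))    # stand-in for a DWT/DCT-transformed EOG block
wm = rng.random((8, 8))      # stand-in for a gray-scale watermark block
marked, keys = embed_svd_watermark(host, wm)
wm_rec = extract_svd_watermark(marked, keys)
print(np.allclose(wm_rec, wm, atol=1e-6))  # True
```

Small alpha keeps the watermarked signal close to the original (high PSNR), at the cost of a watermark recovery that is more sensitive to distortion.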
WSDL-TC: Collaborative customization of web services
Pub Date: 2012-11-01 | DOI: 10.1109/ISDA.2012.6416621
H. Banati, Punam Bedi, P. Marwaha
Nowadays, companies recognize the need to be customer driven, providing superior service to satisfy customers' needs. But as customers and their needs grow increasingly diverse, unnecessary cost and complexity are inevitably added to operations. Service providers have discovered a new frontier in business competition: “collaborative customization.” This approach follows three steps: first, conduct a dialogue with individual customers to help them articulate their needs; second, identify the precise offering that fulfills those needs; and third, make customized products for them. Web services deployed over the Web are accessible to a wide user base, and are designed and contracted to meet the needs of large numbers of users. Often, multiple customizations of the base functionality are required to cater to the needs of multiple sets of users. This forces the service provider to deploy multiple Web services, one customized for each set of users, which increases the cost of infrastructure and maintenance. Since multiple versions of customized Web services are deployed at different URLs, it is difficult and costly to maintain, update, and back up these services and their data. The objective of this work is to reduce the effort and cost that result from these multiple versions of a Web service. We extend WSDL and WSDL-T to WSDL-TC, which reduces cost by maintaining the different collaboratively customized versions of a Web service's operations in a single deployment. The approach also manages access control of these operations for their respective groups. WSDL-TC, being an extension of WSDL-T, can manage the versions of each customized operation that result from changes in business requirements over time. The paper presents a hotel reservation Web service as an example of the WSDL-TC approach. WSDL-TC also eases the task of Web service administrators, who have to manage a single instance instead of multiple instances of a Web service. Moreover, WSDL-TC-based services deployed in a cloud environment may help achieve a greater degree of multi-tenancy, further reducing the cost for service producers.
Multiobjective optimization of green sand mould system using DE and GSA
Pub Date: 2012-11-01 | DOI: 10.1109/ISDA.2012.6416677
T. Ganesan, P. Vasant, I. Elamvazuthi, K. Shaari
Most optimization cases in recent times present themselves in a multi-objective (MO) setting. Hence, it is vital for the decision maker (DM) to have multiple solutions in hand before selecting the best one. In this work, the weighted-sum scalarization approach is used in conjunction with two meta-heuristic algorithms: differential evolution (DE) and the gravitational search algorithm (GSA). These methods are used to generate an approximate Pareto frontier for the green sand mould system problem. Comparative studies were carried out between the algorithms in this work and those from previous work, examining the performance and the quality of the solutions obtained.
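The combination of weighted-sum scalarization with DE can be sketched concretely: minimize w*f1 + (1-w)*f2 for several weights and collect the minimizers as an approximate Pareto frontier. The green sand mould objectives are not given in the abstract, so two toy convex objectives stand in, and the DE below is a minimal DE/rand/1/bin, not the authors' tuned implementation:

```python
import numpy as np

def weighted_sum_de(f1, f2, w, bounds, pop=20, gens=100, seed=0):
    """Minimize the scalarized objective w*f1 + (1-w)*f2 with a minimal
    differential evolution (DE/rand/1/bin, F=0.8, CR=0.9). Sweeping w
    traces an approximate Pareto frontier."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    X = rng.uniform(lo, hi, size=(pop, len(bounds)))
    cost = lambda x: w * f1(x) + (1 - w) * f2(x)
    fit = np.array([cost(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice(pop, 3, replace=False)]
            trial = np.clip(a + 0.8 * (b - c), lo, hi)     # mutation
            mask = rng.random(len(bounds)) < 0.9           # binomial crossover
            trial = np.where(mask, trial, X[i])
            ct = cost(trial)
            if ct < fit[i]:                                # greedy selection
                X[i], fit[i] = trial, ct
    return X[np.argmin(fit)]

# Two competing 1-D objectives with minima at x=0 and x=1; the scalarized
# optimum for weight w is x = 1 - w.
f1 = lambda x: float(x[0] ** 2)
f2 = lambda x: float((x[0] - 1.0) ** 2)
front = [weighted_sum_de(f1, f2, w, [(-2.0, 2.0)]) for w in (0.1, 0.5, 0.9)]
print([round(p[0], 2) for p in front])  # roughly [0.9, 0.5, 0.1]
```

A known limitation of weighted-sum scalarization, worth keeping in mind when reading such comparisons, is that it can only reach points on the convex hull of the Pareto frontier.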