iWordNet: A New Approach to Cognitive Science and Artificial Intelligence
Mark Chang, Monica Chang. Adv. Artif. Intell., 2017. https://doi.org/10.1155/2017/1948317

One of the main challenges in artificial intelligence and computational linguistics is understanding the meaning of a word or concept. We argue that the connotation of the term “understanding,” or the meaning of the word “meaning,” is merely a word-mapping game due to unavoidable circular definitions. These circular definitions arise when an individual defines a concept, then the concepts in its definition, and so on, eventually forming a personalized network of concepts, which we call an iWordNet. Such an iWordNet serves as an external representation of an individual’s knowledge and state of mind at the time of the network’s construction. As a result, “understanding” and knowledge can be regarded as calculable statistical properties of the iWordNet topology. We discuss the construction and analysis of the iWordNet, as well as the proposed “Path of Understanding” in an iWordNet, which characterizes an individual’s understanding of a complex concept such as a written passage. In a pilot study of 20 subjects, we used a regression model to demonstrate that the topological properties of an individual’s iWordNet are related to their IQ score, a relationship that suggests iWordNets as a potential new methodology for studying cognitive science and artificial intelligence.
Natural Language Processing and Fuzzy Tools for Business Processes in a Geolocation Context
I. Truck, Mohammed-Amine Abchir. Adv. Artif. Intell., 2017. https://doi.org/10.1155/2017/9462457

In the geolocation field, where high-level programs and low-level devices coexist, it is often difficult to find a friendly user interface for configuring all the parameters. The challenge addressed in this paper is to propose interfaces that are intuitive and simple, namely natural language interfaces, for interacting with low-level devices. Such interfaces combine natural language processing (NLP) with fuzzy representations of words, which facilitate the elicitation of business-level objectives in our context. A complete methodology is proposed, from lexicon construction to a dialogue software agent that includes a fuzzy linguistic representation based on synonymy.
Method for Solving LASSO Problem Based on Multidimensional Weight
Chunrong Chen, S. Chen, Chen Lin, Yuchen Zhu. Adv. Artif. Intell., 2017. https://doi.org/10.1155/2017/1736389

In data mining, the analysis of high-dimensional data is a critical but thorny research topic. The LASSO (least absolute shrinkage and selection operator) algorithm avoids the limitations of traditional methods, which generally employ stepwise regression with information criteria to choose the optimal model. The improved-LARS (Least Angle Regression) algorithm solves the LASSO problem effectively. This paper presents a LARS algorithm constructed on the basis of multidimensional weights, intended to solve the remaining problems in LASSO. Specifically, in order to distinguish the impact of each variable in the regression, we separately introduce partial principal component analysis (Part_PCA), independent-weight evaluation, and CRITIC into our proposal. These methods change the regression track by weighting every individual variable, so as to optimize both the approach direction and the selection of approach variables. As a consequence, the proposed algorithm can yield better results along the promising direction. Furthermore, we illustrate the favourable properties of the multidimensional-weight LARS algorithm on the Pima Indians Diabetes dataset. The experimental results show an attractive performance improvement of the proposed method over the improved-LARS when both are subjected to the same threshold value.
Selection and Configuration of Sorption Isotherm Models in Soils Using Artificial Bees Guided by the Particle Swarm
T. V. Bharat. Adv. Artif. Intell., 2017. https://doi.org/10.1155/2017/3497652

Precise estimation of isotherm model parameters and selection of isotherms from measured data are essential for predicting the fate and transport of toxic contaminants in the environment. Nonlinear least-squares techniques are widely used for fitting isotherm models to experimental data. However, such conventional techniques pose several limitations for parameter estimation and for the choice of an appropriate isotherm model, as shown in this paper. It is demonstrated in the present work that classical deterministic techniques are sensitive to the initial guess, so their performance is impeded by the presence of local optima. A novel solver based on a modified artificial bee colony (MABC) algorithm is proposed for the selection and configuration of appropriate sorption isotherms. The performance of the proposed solver is compared with three other swarm-intelligence-based solvers for model parameter estimation using measured data from 21 soils. The comparison on the measured data reveals that the proposed solver has excellent convergence capabilities due to its superior exploration-exploitation balance. The solutions estimated by the proposed solver are almost identical to the mean fitness values obtained over 20 independent runs. The advantages of the proposed solver are presented.
Weighted Constraint Satisfaction for Smart Home Automation and Optimization
Nuo Wi Noel Tay, János Botzheim, N. Kubota. Adv. Artif. Intell., 2016. https://doi.org/10.1155/2016/2959508

Smart home automation binds together hardware and software services to support the home's human inhabitants. The rise of web technologies offers applicable concepts and technologies for service composition that can be exploited for automated planning in the smart home, further enhanced by an implementation based on service-oriented architecture (SOA). SOA supports loose coupling and late binding of devices, enabling a more declarative approach to defining services and simplifying home configurations. One such declarative approach is to represent and solve automated planning as a constraint satisfaction problem (CSP), which has the advantage of handling larger domains of home states. However, a CSP uses hard constraints and therefore cannot perform optimization or handle contradictory goals and partial goal fulfillment, practical issues that smart environments face whenever humans are involved. This paper extends the approach to the Weighted Constraint Satisfaction Problem (WCSP). Branch-and-bound depth-first search is used, with the lower bound estimated by a bacterial memetic algorithm (BMA) on a relaxed version of the original optimization problem. Experiments on up to 16-step planning of home services demonstrate the applicability and practicality of the approach, and including a local search over trivial service combinations in the BMA yields further performance gains. This work also aims to set the groundwork for further research in the field.
Twin Support Vector Machine for Multiple Instance Learning Based on Bag Dissimilarities
Divya Tomar, Sonali Agarwal. Adv. Artif. Intell., 2016. https://doi.org/10.1155/2016/1269708

In the multiple instance learning (MIL) framework, an object is represented by a set of instances referred to as a bag. A bag is assigned a positive class label if it contains at least one positive instance; otherwise it is labeled negative. The task of MIL is therefore to learn a classifier at the bag level rather than at the instance level, and traditional supervised learning approaches cannot be applied directly in this setting. In this study, we represent each bag by a vector of its dissimilarities to the other bags in the training dataset and propose a multiple instance learning based Twin Support Vector Machine (MIL-TWSVM) classifier. We use several different ways of representing the dissimilarity between two bags and perform a comparative analysis of them. Experimental results on ten benchmark MIL datasets demonstrate that the proposed MIL-TWSVM classifier is computationally inexpensive and competitive with state-of-the-art approaches. The significance of the experimental results is tested using the Friedman statistic and Nemenyi post hoc tests.
Effect of Collaborative Recommender System Parameters: Common Set Cardinality and the Similarity Measure
Mohammad Yahya H. Al-Shamri. Adv. Artif. Intell., 2016. https://doi.org/10.1155/2016/9386368

Recommender systems are widespread due to their ability to help Web users surf the Internet in a personalized way. For example, a collaborative recommender system is a powerful Web personalization tool that suggests useful items to a given user based on opinions collected from the user's neighbors. Among many factors, the similarity measure strongly affects the performance of a collaborative recommender system, and the similarity measure itself depends largely on the overlap between user profiles. Most previous systems were tested on a predefined number of common items and neighbors, yet system performance may vary if these parameters change. The main aim of this paper is to examine the performance of the collaborative recommender system under many similarity measures, common-set cardinalities, rating mean groups, and neighborhood set sizes. For this purpose, we propose a modified version of the mean difference weight similarity measure and a new evaluation metric, users' coverage, which measures the recommender system's ability to help users. The experimental results show that the modified mean difference weight similarity measure outperforms other similarity measures and that the performance of the collaborative recommender system varies with its parameters; hence the system parameters must be specified in advance.
Automatic Representation and Segmentation of Video Sequences via a Novel Framework Based on the nD-EVM and Kohonen Networks
José-Yovany Luis-García, R. Pérez-Aguila. Adv. Artif. Intell., 2016. https://doi.org/10.1155/2016/6361237

In the Computer Vision field, video segmentation is a subject of interest for almost every video application based on scene content, including indexing, surveillance, medical imaging, event analysis, and computer-guided surgery, to name a few. To achieve their goals, these applications need meaningful information about a video sequence in order to understand the events in the corresponding scene. This semantic information can be obtained from the objects of interest present in the scene. Recognizing objects requires computing features that aid in finding similarities and dissimilarities, among other characteristics, which is why segmentation is one of the most important tasks in video and image processing. The segmentation process consists of separating data into groups that share similar features. On this basis, we propose a novel framework for video representation and segmentation whose main workflow processes an input frame sequence to produce a segmented version as output. For video representation we use the Extreme Vertices Model in the n-Dimensional Space (nD-EVM), while for segmentation we use the Discrete Compactness descriptor as the feature together with Kohonen Self-Organizing Maps.
Efficacious Discriminant Analysis (Classifier) Measures for End Users
E. Eiland, L. Liebrock. Adv. Artif. Intell., 2016. https://doi.org/10.1155/2016/8173625

Many problem domains apply artificial intelligence and machine learning to discriminant analysis, for example, in classification, prediction, and diagnosis. However, the results are rarely perfect, and errors can cause significant losses. Hence, end users are best served when they have performance information relevant to their needs. Starting with the most basic questions, this study considers eight summary statistics often seen in the literature and evaluates their end-user efficacy. The results lead to proposed criteria that summary statistics must satisfy to be efficacious for end users. Testing the same eight summary statistics shows that none satisfy all of the criteria; hence, two criteria-compliant summary statistics are introduced. To show how end users can benefit, the utility of the measures is demonstrated on two problems. A key finding of this study is that researchers can make their test outcomes more relevant to end users with minor changes in their analyses and presentation.
Impacts of the Load Models on Optimal Planning of Distributed Generation in Distribution System
Aashish Kumar Bohre, G. Agnihotri, Manisha Dubey, S. Kalambe. Adv. Artif. Intell., 2015. https://doi.org/10.1155/2015/297436

This work presents the optimal planning (sizing and siting) of distributed generations (DGs) using the butterfly particle swarm optimization (butterfly-PSO/BF-PSO) technique and investigates the impacts of load models. The validity of the results is confirmed by comparison with the well-known Genetic Algorithm (GA) and standard particle swarm optimization (PSO). To exhibit the method's compatibility with load management, the impact of different load models on the size and location of DG is also presented. The fitness function explored is a multiobjective function (FMO) based on three significant indices: the active power loss, reactive power loss, and voltage deviation indices. The optimal solution is obtained by minimizing this multiobjective fitness function with the BF-PSO, GA, and PSO techniques. The optimization techniques are compared across different types of load models: constant, industrial, residential, and commercial. The results clearly show that the BF-PSO technique yields the superior solution in terms of both compatibility and computational effort. The algorithm has been tested on 15-bus radial and 30-bus mesh systems.