Pub Date: 2017-02-01 | DOI: 10.1109/ICDMAI.2017.8073476
M. Hans, Snehal C. Kor, A. S. Patil
As India develops, its urbanized area is growing day by day. Underground cables are well suited to such conditions, and their use is growing because of obvious advantages: lower transmission losses, lower maintenance cost, and less susceptibility to severe weather. They have a few disadvantages too, such as expensive installation and difficult fault detection: because the cable is not visible, it is hard to find the exact location of a fault. In this paper we present two methods for identifying the exact distance of an underground cable fault from the base station: the Murray loop method and the Ohm's law method. The Murray loop method uses a Wheatstone bridge to calculate the exact distance of the fault from the base station and sends it to the user's mobile. In the Ohm's law method, when a fault occurs, the voltage drop varies with the distance to the fault along the cable, since the current varies. Both methods use a voltage converter, a microcontroller, and a potentiometer to locate LG, LL, and LLL faults.
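At bridge balance, the Murray loop test reduces to a one-line proportion; the sketch below (function name and figures are illustrative, not from the paper) assumes a healthy return conductor and a uniform cable cross-section:

```python
def murray_loop_fault_distance(cable_length_m, r1_ohm, r2_ohm):
    """Distance to the fault from the test end via the Murray loop test.

    At bridge balance the fault distance d satisfies
        d = 2 * L * r2 / (r1 + r2)
    where L is the cable length (the loop faulted-core + healthy-core
    path is 2L) and r1, r2 are the ratio-arm resistances of the
    Wheatstone bridge at balance.
    """
    return 2.0 * cable_length_m * r2_ohm / (r1_ohm + r2_ohm)

# Example: 500 m cable, balance reached at r1 = 3 ohm, r2 = 1 ohm
# d = 2 * 500 * 1 / (3 + 1) = 250 m from the test end
print(murray_loop_fault_distance(500.0, 3.0, 1.0))  # 250.0
```

The result depends only on the arm ratio, not on the fault resistance, which is why the bridge method is robust in practice.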
Title: "Identification of underground cable fault location and development". Published in: 2017 International Conference on Data Management, Analytics and Innovation (ICDMAI).
Pub Date: 2017-02-01 | DOI: 10.1109/ICDMAI.2017.8073491
Sangeetha Gunasekhar, K. Dinesh
This paper looks at corporate governance, in terms of board characteristics such as board size and the proportion of independent or non-executive directors, and at firm performance, in determining the chief executive officer's (CEO) compensation. We study India's central state owned enterprises (SOEs) for the year 2015; the SOEs include both listed and non-listed firms. We employ Partial Least Squares (PLS) based Structural Equation Modeling (SEM) to draw results.
Title: "The impact of corporate governance and firm performance on chief executive officer's compensation: Evidence from central state owned enterprises in India"
Pub Date: 2017-02-01 | DOI: 10.1109/ICDMAI.2017.8073537
Arnab Ghosh Dastidar
This paper provides systems and methods for the demand planner to improve forecasting of intermittent long-tail demand by leveraging cluster-based processing. The proposed framework has been established on demand data for a global power-generation business, with reliable forecast accuracy. The exploratory analysis encompasses both demand profiling and product classification stages, and the forecasting system identifies clusters from historical demand data. Clustering aims to partition n products into k clusters, in which each product belongs to the cluster with the nearest product attribute. The demand of the products within each cluster is aggregated, and an Unobserved Components time series Model (UCM) is used to forecast at the cluster level. Cluster-level forecasts are then disaggregated to the child products based on the ratio of recent consumption.
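The final disaggregation step is a simple proportional split; a minimal sketch, assuming the cluster-level UCM forecast is already available as a single number (the function name and SKU labels are mine):

```python
def disaggregate_forecast(cluster_forecast, recent_consumption):
    """Split a cluster-level forecast across child products in
    proportion to each product's recent consumption."""
    total = sum(recent_consumption.values())
    return {sku: cluster_forecast * qty / total
            for sku, qty in recent_consumption.items()}

# Example: a cluster forecast of 120 units spread over three SKUs
# whose recent consumption was 30, 10 and 20 units respectively.
forecast = disaggregate_forecast(120.0, {"A": 30, "B": 10, "C": 20})
print(forecast)  # {'A': 60.0, 'B': 20.0, 'C': 40.0}
```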
Title: "Intermittent demand forecasting for long tail SKUs"
Pub Date: 2017-02-01 | DOI: 10.1109/ICDMAI.2017.8073505
V. Savakhande, C. L. Bhattar, P. L. Bhattar
In today's scenario, the use of high-voltage-gain DC-DC converters has increased. They are gaining popularity due to their extensive application in photovoltaic and fuel cell energy systems, uninterruptible power supplies, and electric vehicles. A comprehensive review is presented of the various high-voltage-gain DC-DC converter topologies, control strategies, and recent trends. Most topologies with a high voltage conversion ratio, low cost, and high efficiency are covered and classified into several categories.
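For context, the baseline that voltage-lift topologies improve upon is the ideal boost converter, whose gain follows from the duty cycle alone; a minimal sketch (not a model of any specific topology reviewed here):

```python
def boost_gain(duty):
    """Ideal (lossless, continuous-conduction-mode) boost converter
    voltage gain: Vout / Vin = 1 / (1 - D)."""
    if not 0 <= duty < 1:
        raise ValueError("duty cycle must be in [0, 1)")
    return 1.0 / (1.0 - duty)

# A 75 % duty cycle ideally quadruples the input voltage.
print(boost_gain(0.75))  # 4.0
```

In practice parasitic resistances cap the achievable gain at high duty cycles, which is the motivation for the voltage-lift and other high-gain topologies the paper surveys.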
Title: "Voltage-lift DC-DC converters for photovoltaic application-a review"
Pub Date: 2017-02-01 | DOI: 10.1109/ICDMAI.2017.8073514
Satyabrata Maity, A. Chakrabarti, D. Bhattacharjee
This paper proposes an efficient way of background modeling and elimination for extracting foreground information from video, applying a new block-based statistical feature extraction technique coined the Block Based Quantized Histogram (BBQH) for background modeling. The inclusion of contrast normalization and anisotropic smoothing in the preprocessing step makes the feature extraction procedure more robust to several difficult situations such as illumination change, dynamic background, bootstrapping, noisy video, and camouflage. Experimental results on benchmark video frames clearly demonstrate that BBQH successfully extracts the foreground information despite these irregularities. BBQH also gives the best F-measure values on most of the benchmark videos in comparison with other state-of-the-art methods, and hence its novelty is well justified.
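The paper's exact BBQH construction is not reproduced here, but a generic block-based quantized histogram, the kind of feature the name suggests, can be sketched as follows (block size, level count, and layout are assumptions):

```python
def block_quantized_histogram(frame, block_size=4, levels=8):
    """Per-block histogram of pixel intensities quantized to `levels` bins.

    frame: 2D list of grayscale values in [0, 255].
    Returns a dict mapping each block's top-left corner (row, col)
    to its quantized histogram.
    """
    h, w = len(frame), len(frame[0])
    feats = {}
    for by in range(0, h, block_size):
        for bx in range(0, w, block_size):
            hist = [0] * levels
            for y in range(by, min(by + block_size, h)):
                for x in range(bx, min(bx + block_size, w)):
                    hist[frame[y][x] * levels // 256] += 1  # quantize to bin
            feats[(by, bx)] = hist
    return feats

# A single 2x2 block with one pixel per quantization bin.
frame = [[0, 255], [128, 64]]
print(block_quantized_histogram(frame, block_size=2, levels=4))
```

Comparing a block's histogram against the modeled background histogram (rather than comparing raw pixels) is what gives block-based features their robustness to noise and small illumination shifts.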
Title: "Block-Based Quantized Histogram (BBQH) for efficient background modeling and foreground extraction in video"
Pub Date: 2017-02-01 | DOI: 10.1109/ICDMAI.2017.8073481
G. Joshi, Nilesh P. Bhosale
Video compression is a technique related to image processing that is widely used in video broadcasting, video conferencing, automotive, consumer, and many other applications. The memory required to store recorded video is a major problem for many applications. For communication via video processing, the reduced memory footprint of the media is obtained by compression. The proposed system has been developed using the Discrete Wavelet Transform (DWT) algorithm, MATLAB, the Xilinx platform, and an FPGA Spartan-3 board. The DWT architecture is described and synthesized using SystemC, and results are obtained by implementing the design on the FPGA. The proposed algorithm saves memory while increasing the signal-to-noise ratio, and the overall performance of the system is evaluated.
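The core transform behind DWT-based compression can be illustrated with a single level of the Haar wavelet; this pure-software sketch (unrelated to the paper's SystemC/FPGA implementation) shows how a signal splits into approximation and detail coefficients:

```python
def haar_dwt_1d(signal):
    """One level of the Haar wavelet transform: pairwise averages
    (approximation) and pairwise half-differences (detail).
    Compression comes from the detail coefficients being small or
    zero on smooth data, so they quantize and encode cheaply."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        approx.append((signal[i] + signal[i + 1]) / 2.0)
        detail.append((signal[i] - signal[i + 1]) / 2.0)
    return approx, detail

a, d = haar_dwt_1d([9, 7, 3, 5])
print(a, d)  # [8.0, 4.0] [1.0, -1.0]
```

Reconstruction is exact (`signal[2i] = a[i] + d[i]`, `signal[2i+1] = a[i] - d[i]`); a 2D image transform applies the same step along rows and then columns.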
Title: "Video compression using DWT algorithm implementing on FPGA"
Pub Date: 2017-02-01 | DOI: 10.1109/ICDMAI.2017.8073484
M. Mukhedkar, Wagh Bhavesh Pandurang
The arithmetic and logic unit (ALU) is the most crucial core component of a CPU, as well as of many embedded processors and microprocessors. Power consumption and area are also key ALU traits. An ALU is usually a combination of blocks that perform logical and arithmetic operations, realized as combinational circuits. This paper focuses on minimizing power consumption and reducing area by exploiting the gate diffusion input (GDI) technique. Using GDI, a 4:1 multiplexer, a 2:1 multiplexer, and a full adder are designed. Simulation is performed with the Tanner EDA tool in 180 nm technology, and the results are compared with conventional pass-transistor and CMOS logic. With the GDI technique, the overall performance and efficiency of the circuit also improve.
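At the logic level, the basic GDI cell is a two-transistor structure whose output follows the N input when the gate input is high and the P input otherwise, which is why a 2:1 multiplexer, AND, and OR each need only one cell; a behavioral sketch (a logic-level model, not the transistor-level design of the paper):

```python
def gdi_cell(g, p, n):
    """Logic-level model of the basic GDI cell: the PMOS passes P when
    G = 0 and the NMOS passes N when G = 1, so OUT = N if G else P."""
    return n if g else p

def gdi_mux(sel, d0, d1):
    return gdi_cell(sel, d0, d1)  # one GDI cell is already a 2:1 mux

def gdi_and(a, b):
    return gdi_cell(a, 0, b)      # A ? B : 0  ==  A AND B

def gdi_or(a, b):
    return gdi_cell(a, b, 1)      # A ? 1 : B  ==  A OR B

for a in (0, 1):
    for b in (0, 1):
        assert gdi_and(a, b) == (a & b)
        assert gdi_or(a, b) == (a | b)
print("GDI truth tables check out")
```

Realizing these functions in one cell instead of a full CMOS gate is the source of the area and power savings the paper reports.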
Title: "A 180 nm efficient low power and optimized area ALU design using gate diffusion input technique"
Pub Date: 2017-02-01 | DOI: 10.1109/ICDMAI.2017.8073492
M. Gumani, Yogesh Korke, P. Shah, Sandeep S. Udmale, Vijay Sambhe, S. Bhirud
Forecasting is an integral part of any organization's decision-making process, enabling it to predict targets and modify strategy to improve future sales or productivity. This paper evaluates and compares various machine learning models, namely ARIMA, Auto-Regressive Neural Network (ARNN), XGBoost, SVM, hybrid models (Hybrid ARIMA-ARNN, Hybrid ARIMA-XGBoost, Hybrid ARIMA-SVM), and STL decomposition (using ARIMA, Snaive, and XGBoost), to forecast sales of a drug store company called Rossmann. The training data set contains past sales and supplemental information about drug stores. Accuracy of these models is measured by metrics such as MAE and RMSE. Initially, a linear model, ARIMA, was applied to forecast sales. ARIMA was not able to capture nonlinear patterns precisely, hence nonlinear models such as neural networks, XGBoost, and SVM were used. The nonlinear models performed better than ARIMA and gave lower RMSE. Then, to further optimize performance, composite models were designed using the hybrid technique and the decomposition technique. Hybrid ARIMA-ARNN, Hybrid ARIMA-XGBoost, and Hybrid ARIMA-SVM each performed better than their respective individual models. A further composite model was designed using STL decomposition, where the decomposed components, namely the seasonal, trend, and remainder components, were forecast by Snaive, ARIMA, and XGBoost respectively. STL gave better results than the individual and hybrid models. This paper evaluates and analyzes why composite models give better results than individual models, and states that the decomposition technique is better than the hybrid technique for this application.
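The hybrid idea, a linear model plus a nonlinear correction fit to its residuals, can be sketched in miniature; here a least-squares trend stands in for ARIMA and a seasonal mean of the residuals stands in for the ARNN/XGBoost/SVM stage (both stand-ins are mine, chosen only to keep the sketch self-contained):

```python
def hybrid_forecast(history, season=7):
    """One-step-ahead forecast: linear trend (stand-in for the ARIMA
    stage) plus the seasonal mean of its residuals (stand-in for the
    nonlinear ARNN / XGBoost / SVM stage)."""
    n = len(history)
    # Stage 1: least-squares line y = a + b * t
    t_mean = (n - 1) / 2.0
    y_mean = sum(history) / n
    b = (sum((t - t_mean) * (y - y_mean) for t, y in enumerate(history))
         / sum((t - t_mean) ** 2 for t in range(n)))
    a = y_mean - b * t_mean
    # Stage 2: model what the linear stage missed, per seasonal position
    resid = [y - (a + b * t) for t, y in enumerate(history)]
    seasonal = [sum(resid[i::season]) / len(resid[i::season])
                for i in range(season)]
    return a + b * n + seasonal[n % season]

# On purely linear data the residual stage adds nothing:
print(hybrid_forecast(list(range(14))))  # 14.0
```

The same decomposition of labor, linear structure first, then a second model on the leftovers, is what the paper's Hybrid ARIMA-* variants formalize.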
Title: "Forecasting of sales by using fusion of machine learning techniques"
Pub Date: 2017-02-01 | DOI: 10.1109/ICDMAI.2017.8073538
Tanashri Karle, D. Vora
In today's world, each individual wishes that his or her private information is not revealed in one way or another. Privacy preservation plays a vital role in protecting individuals' private data from prying eyes. Anonymization techniques enable publication of information that permits analysis while guaranteeing the privacy of sensitive data against a variety of attacks. Anonymization sanitizes the information; it can also keep a person anonymous by means of encryption. Various anonymization techniques and algorithms are discussed in this paper. The paper focuses on generalization and suppression techniques, describes the Datafly and Mondrian algorithms, and compares them.
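Datafly-style generalization can be illustrated with ZIP codes as the quasi-identifier; a minimal sketch (the attribute choice and helper names are mine):

```python
from collections import Counter

def generalize_zip(zipcode, level):
    """Generalize a ZIP code by masking its last `level` digits,
    e.g. "41301" at level 1 becomes "4130*"."""
    return zipcode[:len(zipcode) - level] + "*" * level

def is_k_anonymous(rows, k):
    """True if every combination of quasi-identifier values in `rows`
    occurs at least k times, so no record stands out."""
    counts = Counter(tuple(r) for r in rows)
    return all(c >= k for c in counts.values())

# After one level of generalization, each (ZIP, sex) group has size 2.
rows = [("4130*", "F"), ("4130*", "F"), ("4130*", "M"), ("4130*", "M")]
print(generalize_zip("41301", 1))   # 4130*
print(is_k_anonymous(rows, 2))      # True
```

Datafly iterates this idea: it repeatedly generalizes the attribute with the most distinct values (suppressing outlier rows) until the k-anonymity check passes.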
Title: "PRIVACY preservation in big data using anonymization techniques"
Pub Date: 2017-02-01 | DOI: 10.1109/ICDMAI.2017.8073496
Sagar S. Patil, A. Thorat
Fault current limiters (FCLs) have been instrumental at the industrial level, enhancing power quality through voltage sag mitigation, reduced switchgear upgrades, and protection of equipment from severe damage. With the industrial growth of power system networks, there is an essential need for FCLs. Recent work on FCLs has gathered pace in the area of power system fault diagnosis. Traditional devices such as fuses, circuit breakers (CBs), and transformers are used to limit fault current in the power network; however, a fuse is a single-use device that requires manual replacement, CBs are limited at higher ratings, and transformer inrush current is another problem. This paper presents a detailed review of various fault current limiter configurations, control strategies, recent trends, and their implementation for particular applications.
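The limiting action itself is simple circuit arithmetic: inserting the limiter's impedance in series with the source impedance reduces the prospective fault current; a minimal sketch with illustrative 11 kV figures (values are mine, not from the paper):

```python
def fault_current(v_source, z_source, z_fcl=0.0):
    """Prospective fault current I = V / (Z_source + Z_FCL).
    During normal operation an ideal FCL presents ~0 impedance;
    during a fault it inserts z_fcl and the current drops."""
    return v_source / (z_source + z_fcl)

# 11 kV source, 0.5 ohm source impedance, 2 ohm limiter impedance
without_fcl = fault_current(11000.0, 0.5)        # 22000.0 A
with_fcl = fault_current(11000.0, 0.5, z_fcl=2.0)  # 4400.0 A
print(without_fcl, with_fcl)
```

Bringing the fault current down by this ratio is what lets existing CBs stay within their interrupting ratings instead of being upgraded.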
Title: "Development of fault current limiters: A review"