Implementation of high performance and low leakage half subtractor circuit using AVL technique
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409045
S. Akashe, G. Sharma, V. Rajak, R. Pandey
In this paper, we propose a leakage reduction technique, since high leakage currents in deep-submicron regimes are becoming a major contributor to the total power dissipation of CMOS circuits. Subthreshold leakage current plays a very important role in power dissipation, so to reduce it we propose an adaptive voltage level (AVL) technique, which optimizes the overall voltage across the half subtractor circuit in standby mode. The AVL technique employs two schemes: AVLS (adaptive voltage level at supply), in which the supply voltage is reduced, and AVLG (adaptive voltage level at ground), in which the ground potential is raised. By applying this technique we reduced the leakage current from 9.274 × 10⁻¹² A to 5.428 × 10⁻¹² A, i.e. the technique reduces the leakage current by 41.4%. The circuit is simulated in Cadence® Virtuoso® using 45 nm CMOS technology. Simulation results reveal a significant reduction in leakage current for the proposed cell, with the AVL circuit reducing the supply voltage.
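The reported saving can be checked directly from the two quoted leakage currents:

\[
\frac{9.274\times10^{-12}\,\mathrm{A} - 5.428\times10^{-12}\,\mathrm{A}}{9.274\times10^{-12}\,\mathrm{A}} \times 100\% \approx 41.4\% .
\]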
{"title":"Implementation of high performance and low leakage half subtractor circuit using AVL technique","authors":"S. Akashe, G. Sharma, V. Rajak, R. Pandey","doi":"10.1109/WICT.2012.6409045","DOIUrl":"https://doi.org/10.1109/WICT.2012.6409045","url":null,"abstract":"In this paper, we propose a leakage reduction technique as high leakage currents in deep submicron regimes are becoming a major contributor to total power dissipation of CMOS circuits. Sub threshold leakage current plays a very important role in power dissipation so to reduce the sub threshold leakage current we proposed an adaptive voltage level (AVL) technique. Which optimize the overall voltage across the half subtractor circuit in standby mode. In this AVL technique, two schemes are employed, one is AVLS (adaptive voltage level at supply) in which the supply voltage is reduced and the other is AVLG (adaptive voltage level at ground) in which the ground potential is increased. By applying this technique we have reduced the leakage current from 9.274*10-12ampere) to 5.428*10-12amp. That means this technique the leakage current 41.4%. The circuit is simulated on Cadence(R) Virtuoso(R) in 45nano meter CMOS technology. Simulation results reveal that there is a significant reduction in leakage current for this proposed cell with the AVL circuit reducing the supply voltage.","PeriodicalId":445333,"journal":{"name":"2012 World Congress on Information and Communication Technologies","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124898268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new algorithm for quantitative deciphering of pre-mature MiRNAs using some statistical parameters
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409146
J. Nath, A. Nath
Understanding microRNA (miRNA) in a quantitative manner is now a prime research area, and researchers across the globe have been working on the quantitative estimation of miRNA from different angles. Our present study is a new addition to this effort. miRNAs are short non-coding ribonucleic acid (RNA) molecules, approximately 25 nucleotides long. miRNAs help in understanding the entire scope of post-transcriptional gene regulation; they regulate numerous cellular processes and play roles in tissue remodeling. In the present work the authors made an exhaustive study of the inherent statistical behavior of pre-mature miRNA strings through a few statistical parameters, namely the Hurst exponent, variance, poly-string mean, and poly-string standard deviation. These four parameters not only quantify miRNA strings but also classify them. The authors explore this quantification and classification based on statistical results on the nucleotide strings of pre-mature miRNAs of three organisms: Homo sapiens (hsa), Macaca mulatta (mml) and Pan troglodytes (ptr).
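The paper's exact definitions of the poly-string statistics and its nucleotide encoding are not given in the abstract; the sketch below merely illustrates the kind of quantities involved, using an assumed numeric encoding and a crude rescaled-range (R/S) estimate of the Hurst exponent:

```python
import numpy as np

def encode(seq, mapping={"A": 1.0, "G": 2.0, "C": 3.0, "U": 4.0}):
    """Map a pre-mature miRNA string to a numeric series.
    The mapping is an assumption for illustration; the paper does not
    specify its encoding in the abstract."""
    return np.array([mapping[ch] for ch in seq.upper()])

def hurst_rs(x):
    """Crude rescaled-range (R/S) estimate of the Hurst exponent."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes = [s for s in (8, 16, 32, 64) if s <= n]
    rs = []
    for s in sizes:
        chunks = [x[i:i + s] for i in range(0, n - s + 1, s)]
        vals = []
        for c in chunks:
            dev = np.cumsum(c - c.mean())
            spread = dev.max() - dev.min()
            sd = c.std()
            if sd > 0:
                vals.append(spread / sd)
        rs.append(np.mean(vals))
    # slope of log(R/S) versus log(window size) approximates H
    return np.polyfit(np.log(sizes), np.log(rs), 1)[0]

seq = "UGAGGUAGUAGGUUGUAUAGUU" * 3 + "ACGACG"  # illustrative placeholder, not a real precursor
x = encode(seq)
print("variance of encoded string:", x.var())
print("Hurst exponent estimate   :", hurst_rs(x))
```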
{"title":"A new algorithm for quantitative deciphering of pre-mature MiRNAs using some statistical parameters","authors":"J. Nath, A. Nath","doi":"10.1109/WICT.2012.6409146","DOIUrl":"https://doi.org/10.1109/WICT.2012.6409146","url":null,"abstract":"It is now a prime research area to understand microRNA (miRNA) in a quantitative manner. The researchers across the globe have been working on quantitative estimation of microRNA (miRNA) from different angles. Our present study would be a new addition in the present scenario. The miRNAs are non-coding short ribonucleic acid (RNA) molecules, approximately ~25 nucleotides long. MiRNAs help in understanding the entire scope of post-transcriptional gene regulation. MiRNAs regulate numerous cellular processes and have roles in tissue remodeling. In the present work the authors made an exhaustive study on deciphering the inherent statistical behavior in pre-mature miRNA strings through few statistical parameters namely Hurst Exponent values, Variance, Poly String Mean and Poly String Standard Deviation. These four parameters not only quantify miRNA but also classify the miRNA strings. In the present work the authors have to tried to explore the quantification and classification based on statistical results on nucleotide strings of pre-mature miRNAs of the three organisms Homo sapiens (hsa), Macaca mulatta (mml) and Pan troglodytes (ptr).","PeriodicalId":445333,"journal":{"name":"2012 World Congress on Information and Communication Technologies","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132087830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A convection-diffusion model to determine the position of heavy metal pollution source in topsoil
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409255
Yu Shaochen, He Shanshan, Li Guangyu, Gao Hongmin
In order to study the pollution problem in soil or groundwater and determine the position of a metal pollution source in a given region, a convection-diffusion model for locating a heavy-metal pollution source in topsoil is established. The model is based on solute migration theory and actual measured data of heavy-metal content in the topsoil of the region, combined with a deterministic solute migration model. The model applies a rotation of coordinates and numerical simulation in a reference frame that follows the motion of the fluid particles, discusses the migration law of soluble pollutants in soil or an aquifer under the action of water flow, and solves the governing partial differential equation by the least-squares method to determine the position of the pollution source accurately.
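The abstract does not reproduce the governing equation; the standard convection-diffusion form with a point source at the unknown location \(\mathbf{x}_0\), together with the least-squares fit to measured concentrations that the paper alludes to, reads (our notation, assumed for illustration):

\[
\frac{\partial C}{\partial t} + \mathbf{u}\cdot\nabla C
= \nabla\cdot\bigl(D\,\nabla C\bigr) + q\,\delta(\mathbf{x}-\mathbf{x}_0),
\qquad
\hat{\mathbf{x}}_0 = \arg\min_{\mathbf{x}_0}\;\sum_{i}\bigl(C(\mathbf{x}_i;\mathbf{x}_0) - C^{\mathrm{obs}}_i\bigr)^2,
\]

where \(C\) is the heavy-metal concentration, \(\mathbf{u}\) the water-flow velocity, \(D\) the dispersion coefficient, \(q\) the source strength, and \(C^{\mathrm{obs}}_i\) the measured topsoil concentrations.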
{"title":"A convection-diffusion model to determine the position of heavy metal pollution source in topsoil","authors":"Yu Shaochen, He Shanshan, Li Guangyu, Gao Hongmin","doi":"10.1109/WICT.2012.6409255","DOIUrl":"https://doi.org/10.1109/WICT.2012.6409255","url":null,"abstract":"In order to research water pollution problem in soil or underground water and determine the position of metal pollution source of certain region, the convection-diffusion model to determine the position of heavy metal pollution source in topsoil is established based on solute migration theory and actual measured data of heavy metal content in topsoil of certain region combined with deterministic solute migration model. The model expounds the method of rotation of coordinates and numerical insulation using reference coordinate system which follows the movement of fluid particle, discusses migration law of soluble pollution in soil or aquifer in function of water flow and solves partial differential equation by least square method to determine the position of pollution source accurately.","PeriodicalId":445333,"journal":{"name":"2012 World Congress on Information and Communication Technologies","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134369103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An autonomic computing architecture for business applications
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409118
D. Kurian, P. Chelliah
Though the vision of autonomic computing (AC) is highly ambitious, an objective analysis of autonomic computing and its growth over the last decade throws more incisive and decisive insight on its birth deformities and growth pains. Predominantly software-based solutions are being preferred to make IT infrastructures and platforms adaptive and autonomic in their offerings, outputs, and outlooks. However, the autonomic journey has not been as promising as originally envisaged by industry leaders and luminaries, and professionals and pundits quote several reasons for that gap. Precisely speaking, there is a kind of slackness in articulating its unique characteristics and its enormous potential for business and IT acceleration. There are not many real-world applications that popularize the autonomic concept among the development community. Though some inroads have been made into infrastructure areas such as networking and load balancing, very few attempts have been made in application areas such as ERP, SCM, or CRM. In this paper, we dig deeper to extract and explain where the pioneering and path-breaking field of autonomic computing stands today, and the varied opportunities and possibilities that call for hot pursuit of the autonomic idea. A simple architecture for deploying autonomic business applications is introduced and a sample implementation in an existing CRM system is described. This should form the basis of a new start and the ubiquitous application of AC concepts in business applications.
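The abstract does not spell the architecture out; autonomic managers are conventionally built around a monitor–analyse–plan–execute control loop wrapped around a managed resource such as a CRM module. The sketch below is a generic, hypothetical rendering of that loop, not the paper's specific design (all class names, metrics and thresholds are placeholders):

```python
class AutonomicManager:
    """Generic monitor-analyse-plan-execute loop around a managed
    business application (e.g. a CRM module). Thresholds and actions
    are illustrative placeholders, not the paper's design."""

    def __init__(self, sensor, effector, max_latency_ms=500):
        self.sensor = sensor          # callable returning current metrics
        self.effector = effector      # callable applying a corrective change
        self.max_latency_ms = max_latency_ms

    def monitor(self):
        return self.sensor()

    def analyse(self, metrics):
        return metrics["latency_ms"] > self.max_latency_ms

    def plan(self, metrics):
        # simplest possible policy: add one worker when latency is too high
        return {"action": "scale_out", "workers": metrics["workers"] + 1}

    def execute(self, change):
        self.effector(change)

    def run_once(self):
        metrics = self.monitor()
        if self.analyse(metrics):
            self.execute(self.plan(metrics))

# usage sketch with an in-memory "managed resource"
state = {"latency_ms": 720, "workers": 2}
manager = AutonomicManager(sensor=lambda: dict(state),
                           effector=lambda change: state.update(workers=change["workers"]))
manager.run_once()
print(state)   # {'latency_ms': 720, 'workers': 3}
```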
{"title":"An autonomic computing architecture for business applications","authors":"D. Kurian, P. Chelliah","doi":"10.1109/WICT.2012.6409118","DOIUrl":"https://doi.org/10.1109/WICT.2012.6409118","url":null,"abstract":"Though the vision of autonomic computing (AC) is highly ambitious, an objective analysis of autonomic computing and its growth in the last decade throw more incisive and decisive insights on its birth deformities and growth pains. Predominantly software-based solutions are being preferred to make IT infrastructures and platforms, adaptive and autonomic in their offerings, outputs, and outlooks. However the autonomic journey has not been as promising as originally envisaged by industry leaders and luminaries, and there are several reasons being quoted by professionals and pundits for that gap. Precisely speaking, there is a kind of slackness in articulating its unique characteristics, and the enormous potentials in business and IT acceleration. There are not many real-world applications to popularize the autonomic concept among the development community. Though, some inroads has been made into infrastructure areas like networking, load balancing etc., very few attempts has been exercised in application areas like ERP, SCM, or CRM. In this paper, we would like to dig and dive deeper to extract and explain where the pioneering and path-breaking autonomic computing stands today, and the varied opportunities and possibilities, which insists hot pursuit of the autonomic idea. A simplistic architecture for deployment of autonomic business applications is introduced and a sample implementation in an existing CRM system is described. This should form the basis of new start and ubiquitous application of AC concepts for business applications.","PeriodicalId":445333,"journal":{"name":"2012 World Congress on Information and Communication Technologies","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132037963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of power in 3T DRAM and 4T DRAM Cell design for different technology
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409043
S. Akashe, A. Mudgal, S. B. Singh
In this paper, a power dissipation analysis of 3T and 4T DRAM cell designs has been carried out for nanoscale technology. Many advanced processors now have on-chip instruction and data memory using DRAMs. The major contributor to power dissipation in a DRAM cell is the off-state leakage current, so improving the power efficiency of a DRAM cell is critical to the overall system power dissipation. This paper investigates the effectiveness of 3T and 4T DRAM cell circuit design techniques together with a power dissipation analysis. The 3T DRAM cell is designed at the schematic level for the power dissipation analysis using the Cadence tool. We consider two dynamic random access memory (DRAM) circuits, and the read and write operations for single-bit storage in the 3T and 4T DRAM circuits are shown by simulating them with the Cadence tool.
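As a reminder of why the off-state leakage current dominates, the standby power of a cell is essentially the leakage current times the supply voltage; the numbers below are illustrative only, not values from the paper:

\[
P_{\text{static}} \approx I_{\text{off}} \cdot V_{DD},
\qquad\text{e.g. } I_{\text{off}} = 10\,\mathrm{pA},\ V_{DD} = 0.7\,\mathrm{V}
\ \Rightarrow\ P_{\text{static}} \approx 7\,\mathrm{pW}\ \text{per cell.}
\]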
{"title":"Analysis of power in 3T DRAM and 4T DRAM Cell design for different technology","authors":"S. Akashe, A. Mudgal, S. B. Singh","doi":"10.1109/WICT.2012.6409043","DOIUrl":"https://doi.org/10.1109/WICT.2012.6409043","url":null,"abstract":"In this paper power dissipation analysis for 3T DRAM cell and 4T DRAM cell design have been carried out for the Nanoscale technology. Many advanced processors now have on chip instructions and data memory using DRAMs. The major contribution of power dissipation in DRAM cell is off-state leakage current. Thus, improving the power efficiency of a DRAM cell is critical to the overall system power dissipation. This paper investigates the effectiveness of 3T DRAM cell and 4T DRAM cell circuit design techniques and power dissipation analysis. 3T DRAM cell is designed with the semantic design technique for the analysis of power dissipation using CADENCE Tool. In this paper, we have taken two circuits of dynamic random access memory (DRAM). Read and write operation for single bit storage of 3T DRAM and 4T DRAM circuit is shown by simulating it on CADENCE tool.","PeriodicalId":445333,"journal":{"name":"2012 World Congress on Information and Communication Technologies","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132230372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessment for possible drug application delays in MCT strategy due to pathophysiological constraints of cancer
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409122
D. Majumder
Different analytical modelling efforts have shown that a metronomic chemotherapeutic (MCT) strategy is a better option than maximum tolerable dosing (MTD) for the treatment of cancer under conditions of malignancy. In this work, a major physiological constraint, the drug clearance rate, has been considered. Incorporating it into analytical state-space models, the transformation of the overall system has been examined through computer simulations. Accumulation of drug, dead tumor cells, and metabolites produced by living tumor cells in turn affects subsequent drug application and thereby the therapeutic procedure and its outcome. Simulation results suggest that the delay before subsequent drug administration increases gradually with time due to this constraint.
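The abstract does not state the model equations; the standard way a first-order clearance rate enters such a state-space formulation is sketched below (our notation and a deliberately simplified tumour term, not necessarily the paper's exact model):

\[
\frac{dD(t)}{dt} = -k_{c}\,D(t) + u(t),
\qquad
\frac{dN(t)}{dt} = r\,N(t) - \kappa\,D(t)\,N(t),
\]

where \(D\) is the drug level, \(k_c\) the clearance rate, \(u(t)\) the metronomic dosing schedule, \(N\) the living tumour-cell population, \(r\) its growth rate and \(\kappa\) the kill coefficient; the clearance term governs how quickly the drug (and, in extended versions of the model, dead cells and metabolites) is washed out between doses, which is why clearance enters the timing of subsequent drug applications.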
{"title":"Assessment for possible drug application delays in MCT strategy due to pathophysiological constraints of cancer","authors":"D. Majumder","doi":"10.1109/WICT.2012.6409122","DOIUrl":"https://doi.org/10.1109/WICT.2012.6409122","url":null,"abstract":"Different analytical modelling showed that metronomic chemotherapeutic (MCT) strategy is a better option than maximum tolerable dosing (MTD) for the treatment of cancer under the condition of malignancy. In this work, a major physiological constraint, drug clearance rate has been considered. Incorporating it into analytical state-space models, the transformation of the overall system has been examined through computer simulations. Accumulation of drug, dead tumor cells and metabolites produced by living tumor cells in turn affect the subsequent drug application and thereby the therapeutic procedure and its outcome. Simulation results suggest that subsequent drug administration delay increases gradually with time due to this constraint.","PeriodicalId":445333,"journal":{"name":"2012 World Congress on Information and Communication Technologies","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132806770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Secure encryption with digital signature approach for Short Message Service
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409184
N. Saxena, N. Chaudhari
As the Short Message Service (SMS) is now widely used as a business tool, its security has become a major concern for business organizations and customers, and it remains a critical issue hampering SMS application and development. This paper analyses and compares the most popular digital signature algorithms, namely DSA, RSA and ECDSA. The signature algorithms were implemented in Java with various key sizes, and experimental comparison results of the three algorithms are presented and analysed. The results show that ECDSA is more suitable for generating signatures and RSA is more suitable for verifying signatures on mobile devices. The experimental results illustrate the effectiveness of each algorithm and help in choosing the most suitable algorithm for SMS digital signatures. We then propose a new digital signature algorithm based on ECDSA. Finally, conclusions and future extensions of this work are discussed.
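The paper benchmarks DSA, RSA and ECDSA implemented in Java; purely as an illustration of the ECDSA sign/verify flow it favours for signature generation, here is a minimal sketch using the Python `cryptography` package (the library, curve and message are our choices, not the paper's):

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# sender side: generate a P-256 key pair and sign the SMS payload
private_key = ec.generate_private_key(ec.SECP256R1())
message = b"Hypothetical SMS payload: payment of order #12345 confirmed"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# receiver side: verify the signature with the sender's public key
public_key = private_key.public_key()
try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature valid")
except InvalidSignature:
    print("signature rejected")
```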
{"title":"Secure encryption with digital signature approach for Short Message Service","authors":"N. Saxena, N. Chaudhari","doi":"10.1109/WICT.2012.6409184","DOIUrl":"https://doi.org/10.1109/WICT.2012.6409184","url":null,"abstract":"As Short Message Service (SMS) is now widely used as business tool, its security has become a major concern for business organizations and customers. However, their security is a critical issue cumbering their application and development. This paper analyses the most popular digital signature algorithms such as DSA, RSA and ECDSA and compared these algorithms. These signature algorithms were implemented in Java with various different key sizes set. Experimental comparison results of the three signature algorithms were presented and analysed. The results show that ECDSA is more suitable to generate the signature and RSA is more suitable to verify the signature on mobile devices. The experimental results are presented to show the effectiveness of each algorithm and to choose the most suitable algorithm for SMS digital signature. Next, we propose a new algorithm for digital signature based on ECDSA. At the end, conclusion and future extension of this work is discussed.","PeriodicalId":445333,"journal":{"name":"2012 World Congress on Information and Communication Technologies","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133017855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multistage Recognition Approach for Handwritten Devanagari Script Recognition
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409156
P. Rahul, A. Gaikwad
This paper focuses on handwritten Devanagari script recognition. A scanned word image is taken as the input. The input image is preprocessed and segmented, and features are extracted. The feature vector is applied to an artificial neural network, and the network is trained on different sets of numerals and alphabets. The output of a Self-Organizing Map is applied to Learning Vector Quantization, and the recognition accuracy is calculated.
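The final Learning Vector Quantization stage is not detailed in the abstract; the sketch below shows a generic LVQ1 update rule over feature vectors of the kind the earlier stages would produce (all names and the toy data are illustrative, not the paper's):

```python
import numpy as np

def lvq1_train(X, y, n_protos_per_class=2, lr=0.1, epochs=20, seed=0):
    """Plain LVQ1: pull the winning prototype toward same-class samples,
    push it away from different-class samples."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    protos, proto_labels = [], []
    for c in classes:
        idx = rng.choice(np.where(y == c)[0], n_protos_per_class, replace=False)
        protos.append(X[idx])
        proto_labels.extend([c] * n_protos_per_class)
    protos = np.vstack(protos).astype(float)
    proto_labels = np.array(proto_labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            w = np.argmin(np.linalg.norm(protos - X[i], axis=1))  # winning prototype
            sign = 1.0 if proto_labels[w] == y[i] else -1.0
            protos[w] += sign * lr * (X[i] - protos[w])
    return protos, proto_labels

def lvq1_predict(X, protos, proto_labels):
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return proto_labels[np.argmin(d, axis=1)]

# toy usage: random "feature vectors" standing in for outputs of the earlier stages
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 16)), rng.normal(3, 1, (50, 16))])
y = np.array([0] * 50 + [1] * 50)
protos, labels = lvq1_train(X, y)
print("training accuracy:", (lvq1_predict(X, protos, labels) == y).mean())
```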
{"title":"Multistage Recognition Approach for Handwritten Devanagari Script Recognition","authors":"P. Rahul, A. Gaikwad","doi":"10.1109/WICT.2012.6409156","DOIUrl":"https://doi.org/10.1109/WICT.2012.6409156","url":null,"abstract":"This paper is focused on Devanagari Handwritten Script Recognition. The scanned word image is taken as an input image. An Input image is preprocessed and segmented. The features are extracted. Feature vector is applied to an artificial Neural Network. The Network is trained for the different set of numerals and alphabets. Output of Self Organizing Map applied to Learning Vector Quantization and the accuracy is calculated.","PeriodicalId":445333,"journal":{"name":"2012 World Congress on Information and Communication Technologies","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115553776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An analysis of cloud computing security issues
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409059
Akhil Behl, Kanika Behl
Cloud computing is a whole new paradigm that offers a non-traditional computing model, allowing organizations to adopt Information Technology and related functions without upfront investment and with a lower Total Cost of Ownership (TCO). Cloud computing opens doors to multiple venues, from elastic computing to on-demand provisioning to dynamic fulfillment of storage and computing requirements. However, despite the potential gains of cloud computing, the security of an open-ended and rather freely accessible resource is still questionable, which impacts cloud adoption. The security problem is amplified under the cloud model as new dimensions enter the problem scope related to the architecture, multi-tenancy, layer dependency, and elasticity. This paper introduces a detailed analysis of the cloud security problem. It investigates the problem from the cloud architecture, cloud characteristics, cloud delivery model, and cloud stakeholder perspectives. The paper also investigates some of the key research challenges of implementing cloud-aware security solutions that can plausibly secure the ever-changing and dynamic cloud model. Based on this analysis, it presents a detailed specification of the cloud security problem and the key features that should be covered by any proposed security solution for cloud computing.
{"title":"An analysis of cloud computing security issues","authors":"Akhil Behl, Kanika Behl","doi":"10.1109/WICT.2012.6409059","DOIUrl":"https://doi.org/10.1109/WICT.2012.6409059","url":null,"abstract":"Cloud computing is a whole new paradigm that offers a non-traditional computing model for organizations to adopt Information Technology and related functions and aspects without upfront investment and with lower Total Cost of Ownership (TCO). Cloud computing opens doors to multiple, unlimited venues from elastic computing to on demand provisioning to dynamic storage and computing requirement fulfillment. However, despite the potential gains achieved from the cloud computing, the security of an open-ended and rather freely accessible resource is still questionable which impacts the cloud adoption. The security problem becomes amplified under the cloud model as new dimensions enter into the problem scope related to the architecture, multi-tenancy, layer dependency, and elasticity. This paper introduces a detailed analysis of the cloud security problem. It investigates the problem of security from the cloud architecture perspective, the cloud characteristics perspective, cloud delivery model perspective, and the cloud stakeholder perspective. The paper investigates some of the key research challenges of implementing cloud-aware security solutions which can plausibly secure the ever-changing and dynamic cloud model. Based on this analysis it presents a derived detailed specification of the cloud security problem and key features that should be covered by any proposed security solution for cloud computing.","PeriodicalId":445333,"journal":{"name":"2012 World Congress on Information and Communication Technologies","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115944344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software effort prediction using unsupervised learning (clustering) and functional link artificial neural networks
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409060
Tirimula Rao Benala, R. Mall, Satchidanada Dehuri, Koradda Chinna Babu
Software cost estimation continues to be an area of concern for the management of the software development industry. We use unsupervised learning (clustering algorithms) combined with functional link artificial neural networks for software effort prediction. The unsupervised learning (clustering) step divides the input space into the required number of partitions, eliminating the need for ad hoc selection of the number of clusters. Functional link artificial neural networks (FLANNs), on the other hand, are a powerful computational model; Chebyshev polynomials have been used in the FLANN as the choice of functional expansion, and the performance is studied exhaustively. Three real-life datasets related to software cost estimation have been considered for the empirical evaluation of the proposed method. The experimental results show that our method significantly improves the prediction accuracy of a conventional FLANN and has the potential to become an effective method for software cost estimation.
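As a rough illustration of the pipeline described above — partition the input space with a clustering step, then fit a Chebyshev-expanded functional link model in each partition — here is a simplified sketch; the paper's actual training scheme, cluster counts and datasets are not reproduced, and the toy data are synthetic:

```python
import numpy as np
from sklearn.cluster import KMeans

def cheb_expand(X, degree=4):
    """Chebyshev functional expansion: replace each (scaled) feature x with
    T_1(x)..T_degree(x) via the recurrence T_n = 2x*T_{n-1} - T_{n-2}."""
    X = np.clip(X, -1.0, 1.0)                 # Chebyshev polynomials live on [-1, 1]
    feats = [np.ones((X.shape[0], 1))]        # bias term (T_0 = 1)
    T_prev, T_curr = np.ones_like(X), X
    feats.append(T_curr)
    for _ in range(2, degree + 1):
        T_prev, T_curr = T_curr, 2 * X * T_curr - T_prev
        feats.append(T_curr)
    return np.hstack(feats)

def fit_cluster_flanns(X, y, n_clusters=3, degree=4, seed=0):
    """Partition the input space with k-means, then fit one linear FLANN
    output layer per cluster by least squares."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    weights = {}
    for c in range(n_clusters):
        mask = km.labels_ == c
        Phi = cheb_expand(X[mask], degree)
        weights[c], *_ = np.linalg.lstsq(Phi, y[mask], rcond=None)
    return km, weights

def predict(km, weights, X, degree=4):
    labels = km.predict(X)
    out = np.empty(len(X))
    for c, w in weights.items():
        mask = labels == c
        if mask.any():
            out[mask] = cheb_expand(X[mask], degree) @ w
    return out

# toy usage with synthetic "project attributes -> effort" data
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (120, 3))              # scaled project attributes
y = 5 + 3 * X[:, 0] ** 2 - 2 * X[:, 1] + rng.normal(0, 0.1, 120)
km, W = fit_cluster_flanns(X, y)
pred = predict(km, W, X)
print("MMRE:", np.mean(np.abs(pred - y) / np.abs(y)))
```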
{"title":"Software effort prediction using unsupervised learning (clustering) and functional link artificial neural networks","authors":"Tirimula Rao Benala, R. Mall, Satchidanada Dehuri, Koradda Chinna Babu","doi":"10.1109/WICT.2012.6409060","DOIUrl":"https://doi.org/10.1109/WICT.2012.6409060","url":null,"abstract":"Software cost estimation continues to be an area of concern for managing of software development industry. We use unsupervised learning (e.g., clustering algorithms) combined with functional link artificial neural networks for software effort prediction. The unsupervised learning (clustering) indigenously divide the input space into the required number of partitions thus eliminating the need of ad-hoc selection of number of clusters. Functional link artificial neural networks (FLANNs), on the other hand is a powerful computational model. Chebyshev polynomial has been used in the FLANN as a choice for functional expansion to exhaustively study the performance. Three real life datasets related to software cost estimation have been considered for empirical evaluation of this proposed method. The experimental results show that our method could significantly improve prediction accuracy of conventional FLANN and has the potential to become an effective method for software cost estimation.","PeriodicalId":445333,"journal":{"name":"2012 World Congress on Information and Communication Technologies","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115878781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}