Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409174
M. Rout, B. Majhi, U. M. Mohapatra, R. Mahapatra
The ABC algorithm is a recent meta-heuristic approach with the advantages of memory, multiple search agents, local search, and a solution-improvement mechanism. It can identify a high-quality optimal solution and offers a balance between complexity and performance, thus improving forecasting effectiveness. This paper proposes an efficient prediction model for forecasting short- and long-range stock market prices of two well-known stock indices, the S&P 500 and the DJIA, using a simple adaptive linear combiner (ALC) whose weights are trained with the artificial bee colony (ABC) algorithm. The model is evaluated in terms of mean square error (MSE), and an extensive simulation study reveals that the proposed model, on the test input patterns, is more efficient and accurate than PSO- and GA-trained models.
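The training scheme described above — an adaptive linear combiner whose weights are searched by ABC rather than by gradient descent — can be sketched as follows. This is a minimal illustration on a synthetic price series with simplified employed/onlooker/scout phases; the colony parameters and the toy data are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "price" series; the paper uses real S&P 500 / DJIA data.
series = np.cumsum(rng.normal(0, 1, 300)) + 100
LAGS = 5
X = np.array([series[i:i + LAGS] for i in range(len(series) - LAGS)])
y = series[LAGS:]

def mse(w):
    return np.mean((X @ w - y) ** 2)

def fitness(w):
    return 1.0 / (1.0 + mse(w))          # common ABC fitness mapping

SN, LIMIT, CYCLES = 20, 30, 200          # food sources, abandonment limit, cycles
food = rng.uniform(-1, 1, (SN, LAGS))    # each food source = one weight vector
trials = np.zeros(SN)

def neighbour(i):
    """Perturb one weight of source i towards/away from a random partner."""
    k, j = rng.integers(SN), rng.integers(LAGS)
    v = food[i].copy()
    v[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
    return v

for _ in range(CYCLES):
    for i in range(SN):                  # employed-bee phase
        v = neighbour(i)
        if mse(v) < mse(food[i]):
            food[i], trials[i] = v, 0
        else:
            trials[i] += 1
    fit = np.array([fitness(w) for w in food])
    probs = fit / fit.sum()
    for _ in range(SN):                  # onlooker-bee phase (fitness-proportional)
        i = rng.choice(SN, p=probs)
        v = neighbour(i)
        if mse(v) < mse(food[i]):
            food[i], trials[i] = v, 0
        else:
            trials[i] += 1
    for i in range(SN):                  # scout-bee phase: abandon exhausted sources
        if trials[i] > LIMIT:
            food[i] = rng.uniform(-1, 1, LAGS)
            trials[i] = 0

best = food[np.argmin([mse(w) for w in food])]
print("best MSE:", mse(best))
```

Because acceptance is greedy, the best source's MSE is non-increasing over cycles, which is the "solution improvement mechanism" the abstract refers to.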
Title: An artificial bee colony algorithm based efficient prediction model for stock market indices
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409094
S. M. Handigund, H. Kavitha, A. Aphoorva
The Software Requirements Specification (SRS) of an information system contains descriptions of various usecases that can be clustered as real worlds for different actors. Though a usecase diagram depicts the real world for each actor, some secondary actors may be present in each real world. Thus, the real worlds of the actors are not completely isolated; some overlap exists between them. This overlap indicates redundancy in design diagrams of the same type, and when the design is transformed into code, the redundancy persists in the code as well. Redundancy in code always gives scope for inconsistency and defers integrity. To avoid this, the usecase hierarchies must be identified and reflected in a usecase package diagram. This paper starts from the usecase diagrams and develops an automated methodology to identify the object classes participating in each usecase. It also proposes an automated methodology for the design of usecase hierarchies based on "Extends", "Uses", and "Groups" relationships. The paper considers the SRS of an information system; then, using [1], it molds the SRS, abstracts its control flow in the form of a CFT [1, 2, 3], designs a DFT [1, 2, 3] from it, and then abstracts the relevant sequence of statements for each usecase of an actor. The proposed methodology identifies all possible interrelationships between usecases and forms the hierarchies of usecases using de facto standards of hierarchical levels [4, 5]. This reorganization eliminates the redundancies and packages the usecases following the common-reuse and common-closure principles. We also propose the use of a class diagram for the design of usecase hierarchies, which may later be used for the development of the usecase package diagram.
Title: An automated methodology for the design of usecase package diagram
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409114
P. Khuntia, B. Sahu, C. Mohanty
Digital channel equalizers are located at the front end of receivers to mitigate the effect of Inter-Symbol Interference (ISI). In this paper, the equalization problem is viewed as an optimization problem. In the past, the Least Mean Square (LMS) algorithm, Recursive Least Squares (RLS), Artificial Neural Networks (ANN), and Genetic Algorithms (GA) have been successfully employed for nonlinear channel equalization. The LMS, RLS, and ANN techniques are derivative-based, and hence there are chances that the parameters fall into local minima during training. Though the GA is a derivative-free technique, it takes longer to converge. We propose a novel equalization technique based on Differential Evolution (DE). DE is an efficient and powerful population-based stochastic search technique for solving optimization problems over continuous spaces, and hence its channel equalization performance is expected to be superior.
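The DE-based equalization idea above can be sketched with the classic DE/rand/1/bin scheme searching the taps of an FIR equalizer. The channel taps, the nonlinearity, and the DE parameters below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy nonlinear channel: BPSK symbols through a dispersive FIR channel,
# a mild memoryless nonlinearity, and additive noise (assumed values).
symbols = rng.choice([-1.0, 1.0], 500)
h = np.array([0.26, 0.93, 0.26])
x = np.convolve(symbols, h, mode="same")
x = x + 0.1 * x ** 2 + rng.normal(0, 0.05, x.size)

TAPS = 7
def mse(w):
    """Equalizer cost: MSE between filtered output and transmitted symbols."""
    return np.mean((np.convolve(x, w, mode="same") - symbols) ** 2)

# DE/rand/1/bin over equalizer tap vectors.
NP, F, CR, GEN = 30, 0.6, 0.9, 150
pop = rng.uniform(-1, 1, (NP, TAPS))
cost = np.array([mse(w) for w in pop])

for _ in range(GEN):
    for i in range(NP):
        a, b, c = pop[rng.choice(NP, 3, replace=False)]
        mutant = a + F * (b - c)                   # differential mutation
        cross = rng.random(TAPS) < CR
        cross[rng.integers(TAPS)] = True           # force at least one crossed gene
        trial = np.where(cross, mutant, pop[i])
        tc = mse(trial)
        if tc < cost[i]:                           # greedy selection
            pop[i], cost[i] = trial, tc

best = pop[np.argmin(cost)]
print("equalizer MSE:", cost.min())
```

Note that no derivative of the cost is ever taken, which is what lets DE cope with nonlinear channels where LMS/RLS-style gradient updates can stall in local minima.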
Title: Development of adaptive channel equalization using DE
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409168
G. Rakate
A modified human body tracking system based on the Discrete Wavelet Transform (DWT) and the Mean-shift algorithm is proposed. Most traditional object tracking systems have disadvantages such as complexity, high computation power, and large size. Here, the whole system is implemented on an ARM-Linux platform with a camera mounted on a rotary platform. The DWT divides a frame into four frequency bands without losing spatial information, so it rejects most fake motions in the background, as they are decomposed into the high-frequency wavelet sub-bands. Color and spatial information are used as tracking parameters. The Mean-shift algorithm requires fewer calculations while converging to the new object search window. The ultimate aim of this project is to implement a single-human-body tracking system on the ARM-Linux platform with minimal computation. The combination of the DWT and the Mean-shift algorithm significantly decreases the required computation power. As shown in the results, the human body tracking system is successfully implemented.
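The sub-band split and the mean-shift update can be sketched as below, assuming a one-level Haar filter for brevity (the abstract does not name the wavelet). Small background jitter lands in the LH/HL/HH sub-bands, while tracking proceeds on the quarter-resolution LL band.

```python
import numpy as np

def haar_dwt2(frame):
    """One-level 2-D Haar DWT: split a frame into LL, LH, HL, HH sub-bands."""
    f = frame[:frame.shape[0] // 2 * 2, :frame.shape[1] // 2 * 2].astype(float)
    lo = (f[:, 0::2] + f[:, 1::2]) / 2          # row-wise low-pass
    hi = (f[:, 0::2] - f[:, 1::2]) / 2          # row-wise high-pass
    ll = (lo[0::2] + lo[1::2]) / 2              # column-wise passes
    lh = (lo[0::2] - lo[1::2]) / 2
    hl = (hi[0::2] + hi[1::2]) / 2
    hh = (hi[0::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

# A coarse "body" blob: its edges fall into the high-frequency sub-bands while
# the LL band keeps the spatial layout at quarter resolution.
frame = np.zeros((64, 64))
frame[21:40, 21:40] = 1.0
ll, lh, hl, hh = haar_dwt2(frame)

# One mean-shift style step on the LL band: move the search window towards
# the centroid of the target's mass.
ys, xs = np.nonzero(ll > 0.5)
cy, cx = ys.mean(), xs.mean()
print(ll.shape, (cy, cx))
```

Operating on the LL band quarters the pixel count per mean-shift iteration, which is the computation saving the abstract relies on for the ARM target.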
Title: Human body tracking system based on DWT and Mean-shift algorithm on ARM-Linux platform
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409122
D. Majumder
Different analytical models have shown that the metronomic chemotherapeutic (MCT) strategy is a better option than maximum tolerable dosing (MTD) for the treatment of cancer under the condition of malignancy. In this work, a major physiological constraint, the drug clearance rate, has been considered. Incorporating it into analytical state-space models, the transformation of the overall system has been examined through computer simulations. Accumulation of the drug, dead tumor cells, and metabolites produced by living tumor cells in turn affects subsequent drug application and thereby the therapeutic procedure and its outcome. Simulation results suggest that the delay before each subsequent drug administration increases gradually with time due to this constraint.
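The qualitative effect can be reproduced with a toy first-order clearance model in which accumulated residue (dead cells, metabolites) slows clearance over cycles. All parameter values below are illustrative assumptions, not the paper's state-space model.

```python
import numpy as np

DOSE, THRESHOLD = 1.0, 0.2        # dose size; conc. must fall below this to re-dose
k0, load, conc = 0.5, 0.0, 0.0    # baseline clearance rate, accumulated load, conc.
delays = []

for cycle in range(10):
    conc += DOSE
    load += 0.1                    # each cycle leaves residue that hampers clearance
    k = k0 / (1.0 + load)          # effective clearance rate drops with load
    # Time for conc to decay below THRESHOLD under first-order clearance dC/dt = -kC:
    t_wait = np.log(conc / THRESHOLD) / k
    delays.append(t_wait)
    conc = THRESHOLD               # next dose given exactly at the threshold

print(delays)                      # inter-dose delay grows cycle by cycle
```

Because the effective clearance rate k shrinks every cycle, the wait ln(C/threshold)/k grows monotonically — the same trend the abstract reports from the state-space simulations.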
Title: Assessment for possible drug application delays in MCT strategy due to pathophysiological constraints of cancer
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409043
S. Akashe, A. Mudgal, S. B. Singh
In this paper, power dissipation analysis for 3T and 4T DRAM cell designs has been carried out for nanoscale technology. Many advanced processors now have on-chip instruction and data memory using DRAMs. The major contributor to power dissipation in a DRAM cell is off-state leakage current; thus, improving the power efficiency of a DRAM cell is critical to the overall system power dissipation. This paper investigates the effectiveness of 3T and 4T DRAM cell circuit design techniques along with power dissipation analysis. The 3T DRAM cell is designed with the schematic design technique for the analysis of power dissipation using the Cadence tool. We have taken two circuits of dynamic random access memory (DRAM); the read and write operations for single-bit storage of the 3T and 4T DRAM circuits are shown by simulating them with the Cadence tool.
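A back-of-envelope view of why the off-state leakage dominates and why the cell's transistor count matters: static leakage power scales with the number of off transistors. The per-transistor leakage current and supply voltage below are illustrative placeholders, not the paper's Cadence results.

```python
# Illustrative figures only (assumptions, not simulation outputs).
I_OFF = 10e-9      # A of subthreshold leakage per off transistor
VDD = 1.0          # V supply

def leakage_power(n_transistors):
    """Static leakage power of a cell with all transistors in the off state."""
    return n_transistors * I_OFF * VDD

p3t = leakage_power(3)   # 3T DRAM cell
p4t = leakage_power(4)   # 4T DRAM cell
print(p3t, p4t)          # the extra transistor adds ~33% more static leakage
```

Under this first-order model the 4T cell pays a fixed leakage premium for its extra device, which is the trade-off the simulated power analysis quantifies properly.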
Title: Analysis of power in 3T DRAM and 4T DRAM Cell design for different technology
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409255
Yu Shaochen, He Shanshan, Li Guangyu, Gao Hongmin
To study the water pollution problem in soil or groundwater and determine the position of the metal pollution source of a certain region, a convection-diffusion model for locating the heavy metal pollution source in topsoil is established. The model is based on solute migration theory and actual measured data of heavy metal content in the topsoil of the region, combined with a deterministic solute migration model. It expounds the method of rotation of coordinates and numerical simulation using a reference coordinate system that follows the movement of a fluid particle, discusses the migration law of soluble pollutants in soil or an aquifer as a function of water flow, and solves the partial differential equation by the least squares method to determine the position of the pollution source accurately.
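The inverse-problem step — fitting a source position so that the convection-diffusion solution matches measured concentrations in the least-squares sense — can be sketched in one dimension. The Green's function, flow parameters, and synthetic measurements below are illustrative assumptions; the paper works with real 2-D field data.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D convection-diffusion Green's function for an instantaneous point source
# of mass M released at x0, observed after time T under flow velocity V.
D, V, T, M = 2.0, 0.5, 10.0, 1.0

def conc(x, x0):
    return M / np.sqrt(4 * np.pi * D * T) * np.exp(-(x - x0 - V * T) ** 2 / (4 * D * T))

# Synthetic "measured" topsoil concentrations around a true source at x0 = 3.0.
xs = np.linspace(-10, 30, 81)
measured = conc(xs, 3.0) + rng.normal(0, 1e-3, xs.size)

# Least-squares search over candidate source positions.
candidates = np.linspace(-5, 15, 401)
errors = [np.sum((conc(xs, c) - measured) ** 2) for c in candidates]
x0_hat = candidates[int(np.argmin(errors))]
print("estimated source position:", x0_hat)
```

The drift term V*T in the exponent is what the moving reference frame removes: in coordinates following a fluid particle, the plume reduces to pure diffusion about the source.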
Title: A convection-diffusion model to determine the position of heavy metal pollution source in topsoil
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409184
N. Saxena, N. Chaudhari
As the Short Message Service (SMS) is now widely used as a business tool, its security has become a major concern for business organizations and customers; security remains a critical issue hampering its application and development. This paper analyses the most popular digital signature algorithms, DSA, RSA, and ECDSA, and compares them. The signature algorithms were implemented in Java with various key sizes, and experimental comparison results of the three algorithms are presented and analysed. The results show that ECDSA is more suitable for generating signatures and RSA is more suitable for verifying signatures on mobile devices. The experimental results show the effectiveness of each algorithm and help choose the most suitable algorithm for SMS digital signatures. Next, we propose a new digital signature algorithm based on ECDSA. Finally, conclusions and future extensions of this work are discussed.
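One way to see the sign/verify asymmetry the experiments report is through the exponents: RSA verification uses a small public exponent while RSA signing uses a large private exponent, and ECDSA reverses the trade-off (one scalar multiplication to sign, two to verify). A textbook-sized toy RSA keypair — deliberately insecure, for illustration only — makes the mechanics concrete:

```python
# Toy RSA keypair (tiny textbook primes, NOT secure parameters).
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)       # n = 3233, phi = 3120
e = 17                                   # small public exponent -> cheap verify
d = pow(e, -1, phi)                      # large private exponent -> costly sign

digest = 1234 % n                        # stand-in for a hashed SMS
signature = pow(digest, d, n)            # RSA sign: modular exp. with large d
assert pow(signature, e, n) == digest    # RSA verify: modular exp. with small e

# ECDSA's costs are reversed: signing needs one elliptic-curve scalar
# multiplication, verification needs two — consistent with the finding that
# ECDSA suits signature generation and RSA suits verification on mobiles.
print("toy RSA round-trip OK, d =", d)
```

With realistic key sizes the gap widens: d has roughly as many bits as n, while e is typically 65537, so verification does far fewer modular multiplications than signing.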
Title: Secure encryption with digital signature approach for Short Message Service
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409151
A. C. Panda, H. Mehrotra, B. Majhi
This paper proposes parallel scale space construction for the Scale Invariant Feature Transform (SIFT) using a SIMD hypercube. The parallel SIFT approach is used for iris feature extraction. The input iris images and Gaussian filters are mapped to each processor in the hypercube, and convolution takes place in every processor concurrently. The time complexity of the parallel algorithm is O(N²), whereas the sequential algorithm has complexity O(lsN²), where l is the number of octaves and s is the number of Gaussian scale levels within an octave, for an N×N iris image.
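The sequential baseline being parallelized is the ordinary Gaussian scale-space pyramid: l octaves times s scales, each an O(N²) separable convolution, hence O(lsN²) in total. A sketch with assumed parameter values (sigma0, octave/scale counts are illustrative, not from the paper):

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(3 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur: 1-D convolution over rows, then columns."""
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def scale_space(img, octaves=3, scales=4, sigma0=1.6):
    """Sequential construction: l * s blurs of an N x N image -> O(l s N^2)."""
    pyramid = []
    for _ in range(octaves):
        pyramid.append([blur(img, sigma0 * 2 ** (s / scales)) for s in range(scales)])
        img = img[::2, ::2]              # downsample for the next octave
    return pyramid

iris = np.random.default_rng(3).random((64, 64))
pyr = scale_space(iris)
print(len(pyr), len(pyr[0]), pyr[0][0].shape)
```

In the hypercube version, each of the l*s (octave, scale) convolutions runs on its own processor concurrently, collapsing the l*s factor and leaving the O(N²) per-processor convolution as the critical path.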
Title: Parallel scale space construction using SIMD hypercube
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409118
D. Kurian, P. Chelliah
Though the vision of autonomic computing (AC) is highly ambitious, an objective analysis of autonomic computing and its growth in the last decade yields incisive and decisive insights into its birth deformities and growing pains. Predominantly software-based solutions are preferred to make IT infrastructures and platforms adaptive and autonomic in their offerings, outputs, and outlooks. However, the autonomic journey has not been as promising as originally envisaged by industry leaders and luminaries, and several reasons are quoted by professionals and pundits for that gap. Precisely speaking, there is a kind of slackness in articulating its unique characteristics and its enormous potential for business and IT acceleration. There are not many real-world applications to popularize the autonomic concept among the development community. Though some inroads have been made in infrastructure areas such as networking and load balancing, very few attempts have been made in application areas such as ERP, SCM, or CRM. In this paper, we dig deeper to explain where the pioneering, path-breaking field of autonomic computing stands today, and the varied opportunities and possibilities that call for hot pursuit of the autonomic idea. A simple architecture for the deployment of autonomic business applications is introduced, and a sample implementation in an existing CRM system is described. This should form the basis of a new start and the ubiquitous application of AC concepts in business applications.
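Architectures of this kind are typically built around the standard MAPE-K (Monitor, Analyze, Plan, Execute over shared Knowledge) autonomic control loop. A minimal sketch follows; the CRM-style sensor metric and scale-out effector are invented placeholders for illustration, not the paper's implementation.

```python
# Minimal MAPE-K autonomic manager (illustrative sensors/effectors).
class AutonomicManager:
    def __init__(self, threshold_ms=200):
        # K: shared knowledge base consulted by every phase.
        self.knowledge = {"threshold_ms": threshold_ms, "replicas": 1}

    def monitor(self, metrics):           # M: collect sensor data
        return metrics["response_ms"]

    def analyze(self, response_ms):       # A: detect a symptom against knowledge
        return response_ms > self.knowledge["threshold_ms"]

    def plan(self, degraded):             # P: choose an adaptation
        return "scale_out" if degraded else "steady"

    def execute(self, action):            # E: actuate through effectors
        if action == "scale_out":
            self.knowledge["replicas"] += 1
        return self.knowledge["replicas"]

    def step(self, metrics):
        return self.execute(self.plan(self.analyze(self.monitor(metrics))))

mgr = AutonomicManager()
print(mgr.step({"response_ms": 350}))   # degraded -> scales out
print(mgr.step({"response_ms": 120}))   # healthy  -> steady
```

Wrapping a business application such as a CRM system in such a loop is what turns a conventionally managed deployment into a self-managing one: the application itself stays unchanged while the manager observes and adapts it.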
Title: An autonomic computing architecture for business applications