Multiresolution feature extraction (MRFE) based speech recognition system
Pub Date: 2013-07-25 | DOI: 10.1109/ICRTIT.2013.6844197
M. Priyanka, V. S. Solomi, P. Vijayalakshmi, Tushar Nagarajan
A speech recognition system converts uttered speech into text. The accuracy of such a system depends on the models generated: models are trained on features extracted from the available training data and are then used to recognise the spoken text. In the conventional feature extraction method, features are extracted using a single window size (say, 20 ms). Instead of this fixed window size, we propose to extract features from the same speech signal using multiple window sizes. When multiple window sizes are used, multiple sets of feature vectors are derived for the same word, thereby increasing the number of training examples. Experiments show that when features are extracted with multiple window sizes, the variation among the feature vectors increases considerably, which leads to better acoustic models. This multiresolution feature extraction technique is successfully used to build a speech recogniser. To analyse the performance of multiresolution feature extraction, an isolated-word speech recognition system is developed on the TIMIT speech corpus. Results reveal that an improvement of around 8% in recognition accuracy is obtained over the conventional single-resolution feature extraction method.
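The core idea, extracting parallel feature sets with several analysis window lengths, can be sketched as follows. This is a minimal illustration assuming librosa's MFCC front end (the paper does not specify its feature pipeline); the 10/20/30 ms windows, 50% overlap, and 13 coefficients are example choices.

```python
import librosa

def multiresolution_mfcc(path, win_sizes_ms=(10, 20, 30), n_mfcc=13):
    """Extract one MFCC feature-vector set per analysis window size."""
    y, sr = librosa.load(path, sr=None)
    feature_sets = []
    for win_ms in win_sizes_ms:
        win = int(sr * win_ms / 1000)      # window length in samples
        hop = win // 2                     # 50% overlap, an illustrative choice
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                    n_fft=win, win_length=win, hop_length=hop)
        feature_sets.append(mfcc.T)        # frames x coefficients
    return feature_sets

# Every element of the returned list describes the same utterance, so all
# of them can be pooled as training examples for that word's model.
```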
{"title":"Multiresolution feature extraction (MRFE) based speech recognition system","authors":"M. Priyanka, V. S. Solomi, P. Vijayalakshmi, Tushar Nagarajan","doi":"10.1109/ICRTIT.2013.6844197","DOIUrl":"https://doi.org/10.1109/ICRTIT.2013.6844197","url":null,"abstract":"A speech recognition system will recognise the speech uttered into text. The accuracy of the recognition system depends on the models generated. Models are trained based on the features extracted from the available training data. These models are used to recognise the spoken text. In the conventional feature extraction method, features are extracted using single window size (say 20ms). Instead of this fixed window size, we propose to extract features using multiple window sizes from the same speech signal. When multiple window sizes are used, multiple sets of feature vectors are derived for the same word thereby increasing the number of examples. Experiments show that when features are extracted with multiple window sizes, the variations among the feature vectors are considerably increased, which will lead to better acoustic models. This multiresolution feature extraction technique is successfully used for building a speech recogniser. To analyse the performance of multiresolution feature extraction, isolated word speech recognition system is developed for the TIMIT speech corpus. Results reveal that around 8% improvement in recognition accuracy is obtained over conventional single resolution feature extraction based method.","PeriodicalId":113531,"journal":{"name":"2013 International Conference on Recent Trends in Information Technology (ICRTIT)","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124738976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ANN-based predictive analytics of forecasting with sparse data: Applications in data mining contexts
Pub Date: 2013-07-25 | DOI: 10.1109/ICRTIT.2013.6844181
Mohammad A. Dabbas, P. Neelakanta, D. DeGroff
The technoeconomics of a business structure exhibits evolving performance attributes determined by various exogenous and endogenous causative variables. This paper proposes a predictive model to elucidate the forecast performance of such evolving traits in large business structures (such as electric power utility companies). The method uses artificial neural network (ANN) based predictive analytics viewed in a data mining context. Specifically, should the available data be sparse, a method of scarcity removal in the knowledge domain is proposed for subsequent use in the ANN-based data mining exercise. Forecast projections of the growth/decay profile across the ex ante regime are then determined. Further, for each forecast projection, a cone-of-forecast is suggested to indicate the corresponding limits (error bounds) on the accuracy of rule extraction in data mining. Example simulations on real-world data relating wind-power generation to wind speed demonstrate the efficacy of the forecasting strategy pursued. Possible shortcomings of the proposals are identified.
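As a hedged illustration of ANN forecasting with a cone-of-forecast, the sketch below fits a small scikit-learn MLP to synthetic wind-speed/power data. The network size, the synthetic data, and the widening error band are all assumptions, not the paper's model, dataset, or sparsity-removal method.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy wind-speed (m/s) -> power (kW) data standing in for the paper's
# real-world dataset; the sparsity-removal step is not reproduced.
rng = np.random.default_rng(0)
speed = rng.uniform(3, 15, 200).reshape(-1, 1)
power = 0.5 * speed.ravel() ** 3 + rng.normal(0, 20, 200)  # cubic curve + noise

ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
ann.fit(speed, power)

# Ex ante projection with a widening cone of forecast: the band grows
# with distance beyond the training range, scaled by the in-sample
# residual spread (an illustrative rule, not the paper's).
grid = np.linspace(3, 20, 50).reshape(-1, 1)
pred = ann.predict(grid)
sigma = np.std(power - ann.predict(speed))
cone = sigma * (1 + np.maximum(0.0, grid.ravel() - speed.max()) / speed.max())
lower, upper = pred - 2 * cone, pred + 2 * cone
```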
{"title":"ANN-based predictive analytics of forecasting with sparse data: Applications in data mining contexts","authors":"Mohammad A. Dabbas, P. Neelakanta, D. DeGroff","doi":"10.1109/ICRTIT.2013.6844181","DOIUrl":"https://doi.org/10.1109/ICRTIT.2013.6844181","url":null,"abstract":"Technoeconomics of a business structure exhibit evolving performance attributes as decided by various exogenous and endogenous causative variables. Proposed in this paper is a predictive model to elucidate the forecast performance on such evolving traits in large business structures (like electric power utility companies). The method uses artificial neural network (ANN) based predictive analytics viewed in data mining contexts. Specifically, should the available data be sparse, a method of scarcity removal in the knowledge domain is proposed for subsequent use in the ANN-based data mining exercise. Hence forecast projections on the growth/decay profile across the ex ante regime are determined. Further, for each forecast projection, a cone-of-forecast is suggested toward the corresponding limits (error-bounds) on the accuracy of rules extraction in data mining. Example simulations pertinent to real-world data on the performance of wind-power generation versus wind-speed are presented demonstrating the efficacy of forecasting strategy pursued. Possible shortcomings of the proposals are identified.","PeriodicalId":113531,"journal":{"name":"2013 International Conference on Recent Trends in Information Technology (ICRTIT)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125023403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multiresolution analysis for computer-aided mass detection in mammogram using pixel based segmentation method
Pub Date: 2013-07-25 | DOI: 10.1109/ICRTIT.2013.6844207
J. Pragathi, H. Patil
Mammography is an X-ray imaging technique for diagnosing breast tumours. Segmenting tumours in mammogram images is a difficult task because the images are poor in contrast and the lesions are surrounded by tissue with similar characteristics. In this paper, an automatic detection algorithm is proposed to segment suspicious masses or lesions. Mammogram images are analyzed by wavelet decomposition, and the algorithm combines region-based and pixel-based segmentation to detect the masses. The performance of the system is evaluated on a dataset containing 60 images. Experimental results show that the relative error calculated for each image is less than 15%, and the system exhibits a sensitivity of 90%.
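A minimal sketch of a wavelet-plus-pixel/region segmentation pipeline of the kind the abstract describes, using PyWavelets and SciPy. The wavelet choice, threshold factor k, and minimum region area are assumed values, not the paper's parameters.

```python
import numpy as np
import pywt
from scipy import ndimage

def detect_masses(image, wavelet="db4", level=2, k=2.5, min_area=50):
    """Wavelet multiresolution analysis followed by pixel- and
    region-based segmentation; k and min_area are assumed values."""
    # Keep the coarse approximation band, which suppresses fine
    # texture and noise while preserving dense masses.
    approx = pywt.wavedec2(image.astype(float), wavelet, level=level)[0]

    # Pixel-based step: threshold bright pixels (masses appear dense).
    mask = approx > approx.mean() + k * approx.std()

    # Region-based step: keep only connected components large enough to
    # be plausible lesions. The mask is at the coarse-band resolution.
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return np.isin(labels, 1 + np.flatnonzero(sizes >= min_area))
```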
{"title":"Multiresolution analysis for computer-aided mass detection in mammogram using pixel based segmentation method","authors":"J. Pragathi, H. Patil","doi":"10.1109/ICRTIT.2013.6844207","DOIUrl":"https://doi.org/10.1109/ICRTIT.2013.6844207","url":null,"abstract":"Mammography is an X-ray imaging technique for diagnosing breast tumour. Segmentation of tumour in the mammogram images are difficult task because they are poor in contrast and the lesions are surrounded by tissue with similar characteristics. In this paper, an automatic detection algorithm is proposed to segment the suspicious masses or lesions. Mammogram images are analyzed by wavelet and the algorithm utilizes combination of region based segmentation and pixel based segmentation to detect the masses. The performance of the system is then evaluated using a dataset containing 60 images. From the experimental results the relative error calculated for each image is less than 15% and exhibits a sensitivity of 90%.","PeriodicalId":113531,"journal":{"name":"2013 International Conference on Recent Trends in Information Technology (ICRTIT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130009473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SK-IR: Secured keyword based retrieval of sensor data in cloud
Pub Date: 2013-07-25 | DOI: 10.1109/ICRTIT.2013.6844227
M. Sumalatha, K. Praveenraj, C. Selvakumar
Sensor data are collected at regular intervals and securely stored in the cloud. To provide this security, this work proposes SK-IR, a secured keyword-based retrieval scheme. The symmetric-key encryption scheme AES is used to secure the sensor data. Data are retrieved from the cloud based on the keywords, scores, and file locations available in a posting list, and a hash function is applied to the posting list to enhance security. The posting lists are stored at an undisclosed cloud server location, which protects the data from being hacked. We study the performance of the encryption methodology over an unstructured database on the HBase/Hadoop platform, since HBase can handle the huge volume, variety, and complexity of data used on Hadoop.
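The two ingredients named in the abstract, AES encryption of the sensor records and a posting list indexed by hashed keywords, can be sketched as below. The record layout, scoring, and use of AES-GCM are illustrative assumptions; the paper's exact scheme and its HBase storage layer are not reproduced.

```python
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

# sha256(keyword) -> [(file_location, score, nonce, ciphertext)]
posting_list = {}

def store(keyword, file_location, score, record: bytes):
    nonce = os.urandom(12)
    ct = aesgcm.encrypt(nonce, record, None)   # AES-encrypted sensor record
    h = hashlib.sha256(keyword.encode()).hexdigest()
    posting_list.setdefault(h, []).append((file_location, score, nonce, ct))

def retrieve(keyword):
    h = hashlib.sha256(keyword.encode()).hexdigest()
    # Entries are returned best-score first; decryption needs the key.
    entries = sorted(posting_list.get(h, []), key=lambda e: -e[1])
    return [aesgcm.decrypt(nonce, ct, None) for _, _, nonce, ct in entries]

store("temperature", "sensor-bucket/f1", 0.9, b"23.5C @ 10:00")
print(retrieve("temperature"))
```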
{"title":"SK-IR: Secured keyword based retrieval of sensor data in cloud","authors":"M. Sumalatha, K. Praveenraj, C. Selvakumar","doi":"10.1109/ICRTIT.2013.6844227","DOIUrl":"https://doi.org/10.1109/ICRTIT.2013.6844227","url":null,"abstract":"The sensor data are collected in regular interval and securely stored in cloud. In order to provide security this research work aims in proposing SK-IR: secured keyword based retrieval. In security mechanism the symmetric key encryption scheme AES is being used to secure the sensor data. The data are retrieved from cloud based on its keywords, scores and file location which are available in the posting list. The hash function is implemented to the posting list to enhance the security. The posting lists are stored in unknown cloud server location and it protects the data from getting hacked. In this paper we studied the performance of encryption methodology over unstructured database by choosing Hbase/Hadoop platform. Since HBase can handle the huge volume, variety, and complexity of data used on the Hadoop platform.","PeriodicalId":113531,"journal":{"name":"2013 International Conference on Recent Trends in Information Technology (ICRTIT)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126353894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online franchise capturing using IPv6 through Automated Teller Machines
Pub Date: 2013-07-25 | DOI: 10.1109/ICRTIT.2013.6844264
Kausal Malladi, S. Sridharan
This paper aims at the realization and implementation of an online voting system that could be deployed on Automated Teller Machines (ATMs) using IPv6. All ATMs are currently in private networks of their respective bank servers; if they are transitioned to IPv6, they enter a new domain of public networking. Such a transition, once deployed across all ATM terminals, also opens the possibility of many security-related threats. A basic solution is to distribute, prior to the configuration of the ATM terminals, the digital certificates of both the bank and election commission servers. This would make it almost impossible for others to intrude on and disturb the transactions established by the ATM terminals with the bank or election commission servers. The number of transactions required between the election commission server and the ATM terminals would be reduced by at least half compared with the current scenario of routing all transactions through the National Financial Switch (NFS). Additional encryption mechanisms further ensure that these transactions remain secure. The two phases of transactions, voter registration and online voting [1], aim at automating the entire process of franchise capturing.
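A hedged sketch of the pre-distributed-certificate idea using Python's standard ssl module: the terminal trusts only the certificate bundle installed at configuration time, so a handshake with anything other than the genuine election commission server fails. The host name, port, file path, and message are placeholders, not the paper's protocol.

```python
import socket
import ssl

# Trust only the election commission certificate shipped with the
# ATM's configuration; any other server fails verification.
ctx = ssl.create_default_context(cafile="/atm/config/ec_server_cert.pem")
ctx.check_hostname = True

with socket.create_connection(("ec.example.gov", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="ec.example.gov") as tls:
        tls.sendall(b"VOTER-REGISTRATION ...")   # placeholder message
        reply = tls.recv(4096)
```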
{"title":"Online franchise capturing using IPv6 through Automated Teller Machines","authors":"Kausal Malladi, S. Sridharan","doi":"10.1109/ICRTIT.2013.6844264","DOIUrl":"https://doi.org/10.1109/ICRTIT.2013.6844264","url":null,"abstract":"This paper aims at realization and implementation of an online voting that could be implemented in Automated Teller Machine (ATM) using IPv6. All ATMs are now currently in a private network of their respective bank servers. If it is possible to transform them into IPv6, they enter into a new domain of public networking. Also there is a possibility of many security related threats when these transformations are successfully deployed in all ATM terminals. A basic solution would be to distribute prior to the configuration of ATM terminals, the digital certificates of both bank and election commission servers. This would almost make impossible for others to intrude and disturb the transaction established by the ATM terminals between the bank or election commission servers. The number of transactions between the election commission server and the ATM terminals that will be required in case of the current scenario of routing all transactions through national financial switch (NFS) would reduce by at least half. Also additional encryption mechanisms established ensure secure state of these transactions. The two phases of transactions, the voter registration and online voting[1] aims at making entire process of franchise capturing automated.","PeriodicalId":113531,"journal":{"name":"2013 International Conference on Recent Trends in Information Technology (ICRTIT)","volume":"21 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120923451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modified Max-Log-MAP turbo decoding algorithm using optimized scaling factor
Pub Date: 2013-07-25 | DOI: 10.1109/ICRTIT.2013.6844174
R. Krishnamoorthy, N. Pradeep
Max-Log-MAP is a Soft-Input Soft-Output (SISO) algorithm that determines the probability of the most likely path through the trellis, and hence gives suboptimal performance compared with the Log-MAP algorithm. A simple but effective technique to improve the performance of the Max-Log-MAP (MLMAP) algorithm is to scale the extrinsic information exchanged between the two decoders by an appropriate Scaling Factor (SF). The Modified Max-Log-MAP (M-MLMAP) algorithm fixes an arbitrary SF for the inner decoder S2 and an optimized SF for the outer decoder S1. This paper presents the performance of the M-MLMAP decoding algorithm, which reduces the overestimation of reliability values to achieve a low Bit Error Rate (BER). An appropriate mathematical relationship between the SF and Eb/N0 is also proposed. Numerical results show that the M-MLMAP algorithm improves turbo decoding performance over Additive White Gaussian Noise (AWGN) and Rayleigh fading channels, with a gain of 0.75 dB over the MLMAP algorithm at a BER of 2×10⁻⁵ on the Rayleigh fading channel.
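The mechanism at the heart of the method, replacing the exact Jacobian logarithm with a plain max and then scaling the extrinsic LLRs, can be shown in a few lines. The SF value 0.7 below is a commonly cited choice in the turbo-decoding literature, not the paper's optimized factor.

```python
import numpy as np

def max_star(a, b):
    """Exact Jacobian logarithm used by Log-MAP:
    max*(a, b) = max(a, b) + ln(1 + e^-|a-b|)."""
    return np.maximum(a, b) + np.log1p(np.exp(-np.abs(a - b)))

def max_log(a, b):
    """Max-Log-MAP approximation: drops the correction term, which
    overestimates the reliability of the resulting LLRs."""
    return np.maximum(a, b)

def scale_extrinsic(llr_extrinsic, sf=0.7):
    """Scale extrinsic LLRs before passing them to the other decoder,
    compensating for the overestimated reliabilities."""
    return sf * np.asarray(llr_extrinsic)

a, b = 2.0, 1.5
print(max_star(a, b), max_log(a, b))     # 2.47... vs 2.0
print(scale_extrinsic([3.1, -0.8, 1.9]))
```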
{"title":"Modified Max-Log-MAP turbo decoding algorithm using optimized scaling factor","authors":"R. Krishnamoorthy, N. Pradeep","doi":"10.1109/ICRTIT.2013.6844174","DOIUrl":"https://doi.org/10.1109/ICRTIT.2013.6844174","url":null,"abstract":"The Max-Log-MAP is a Soft Input Soft Output (SISO) algorithm, which determines the probability of most likely path through the trellis and hence it gives sub optimal performance compared to Log-MAP algorithm. A simple but effective technique to improve the performance of Max-Log-MAP (MLMAP) algorithm is to scale the extrinsic information exchanged between two decoders using appropriate Scaling Factor (SF). Modified Max-Log-MAP (M-MLMAP) algorithm is achieved by fixing an arbitrary SF for inner decoder S2 and an optimized SF for the outer decoder S1. This paper presents the performance of the Modified Max-Log-MAP decoding algorithm by reducing the over estimation of reliability values to achieve low Bit Error Rate (BER). Appropriate mathematical relationship between SF and Eb/N0 is also proposed. The numerical results show that M-MLMAP algorithm improved the performance of turbo decoding over Additive White Gaussian Noise (AWGN) and Rayleigh fading channels. The proposed M-MLMAP algorithm showed a gain of 0.75dB over MLMAP algorithm at BER of 2×10-5 for Rayleigh fading channel.","PeriodicalId":113531,"journal":{"name":"2013 International Conference on Recent Trends in Information Technology (ICRTIT)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124411339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance evaluation of full adders in ASIC using logical effort calculation
Pub Date: 2013-07-25 | DOI: 10.1109/ICRTIT.2013.6844271
R. Uma, P. Dhavachelvan
Device scaling has been a relatively straightforward issue in terms of power, speed, and noise. For submicron CMOS technology, area, topology selection, power dissipation, and speed are imperative aspects, especially when designing Clocked Storage Elements (CSEs), adder circuits, and MAC units for high-speed, low-energy designs such as portable battery-operated devices and microprocessors. This paper presents a logical-effort-based delay model for different adder topologies that seeks the minimum delay and the minimum number of stages while reducing the transistor count and power consumption of the circuit. In this work, a full adder is designed with 10 carry-logic and 6 sum-logic constructions; its delay is examined over a wide spectrum of electrical effort, and its performance is assessed in terms of the number of stages and transistor sizes. Based on this mathematical analysis, the optimized circuits are implemented in Tanner EDA with TSMC MOSIS 250 nm technology, and their performance is analyzed in terms of transistor count, delay, and power dissipation against the mathematical model. All the logic constructions (carry logic and sum logic) used in designing the full adder are realized in CMOS logic.
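The logical-effort delay model the evaluation rests on is easy to state in code: for an N-stage path with path logical effort G, branching effort B, and electrical effort H, the minimum delay is D = N(GBH)^{1/N} + P when the effort is shared equally per stage. The sketch below computes this; the stage efforts and parasitics are textbook-style examples, not the paper's extracted values.

```python
from math import prod

def path_delay(g, b, p, H):
    """Logical-effort path delay in units of tau:
    D = N * (G * B * H)**(1/N) + P, effort shared equally per stage.
    g: stage logical efforts, b: stage branching efforts,
    p: stage parasitic delays, H: overall electrical effort."""
    n = len(g)
    F = prod(g) * prod(b) * H          # path effort F = G * B * H
    return n * F ** (1.0 / n) + sum(p)

# Example 2-stage carry path: a 2-input NAND (g = 4/3) driving an
# inverter (g = 1), no branching, H = 5.
print(path_delay([4 / 3, 1.0], [1.0, 1.0], [2.0, 1.0], 5.0))   # ~8.16 tau
```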
{"title":"Performance evaluation of full adders in ASIC using logical effort calculation","authors":"R. Uma, P. Dhavachelvan","doi":"10.1109/ICRTIT.2013.6844271","DOIUrl":"https://doi.org/10.1109/ICRTIT.2013.6844271","url":null,"abstract":"Device scaling has been a relatively straight forward issue in terms of power, speed and noise aspect. For submicron CMOS technology area, topology selection, power dissipation and speed are imperative aspect especially for designing Clocked Storage Element (CSE), adder circuits and MAC unit for high-speed and low-energy design like portable batteries and microprocessors. This paper presents a logical based delay model for different adder topologies in order to obtain minimum delay, minimum number of stages in minimizing the transistor count and the power consumption of the circuit. In this work a full adder is designed with 10 carry and 6 sum logic constructions and its delay is observed with wide spectrum of electrical effort and its performance is observed in terms of number of stages and transistor sizes. From this mathematical analysis the optimized circuits are implemented using Tanner EDA with TSMC MOSIS 250 nm technology and its performance is analyzed in terms of transistor count, delay and power dissipation with respect to the mathematical model. All the logical construction (carry logic and sum logic) used for designing full adder are realized in terms of CMOS logic.","PeriodicalId":113531,"journal":{"name":"2013 International Conference on Recent Trends in Information Technology (ICRTIT)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122762867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Clustering of lung cancer data using Foggy K-means
Pub Date: 2013-07-25 | DOI: 10.1109/ICRTIT.2013.6844173
A. Yadav, Divya Tomar, Sonali Agarwal
In the medical field, huge amounts of data are available, which creates the need for powerful data analysis tools to extract useful information. Several studies in the data mining field have aimed at improving the capability of data analysis on huge datasets. Cancer is one of the most fatal diseases in the world, and lung cancer, with its high rate of occurrence, is one of the most serious problems and the biggest killer disease in India. Predicting the occurrence of lung cancer is very difficult because it depends on multiple attributes that cannot be analyzed easily. In this paper, a real-time lung cancer dataset is taken from SGPGI (Sanjay Gandhi Post Graduate Institute of Medical Sciences), Lucknow. A real-time dataset always carries obvious challenges, such as missing values, high dimensionality, noise, and outliers, which make it unsuitable for efficient classification. A clustering approach is an alternative way to analyze the data in an unsupervised manner. The main focus of this research work is to develop a novel approach, called Foggy K-means clustering, to create accurate clusters from the desired real-time datasets. Experimental results indicate that the Foggy K-means algorithm gives better results on real datasets than the simple K-means algorithm and provides a better solution to this real-world problem.
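Foggy K-means itself is the paper's contribution and its update rule is not given in the abstract, so the sketch below only sets up the simple K-means baseline it is compared against, with median imputation and scaling as one plausible way to handle the missing values and outliers the abstract mentions. The synthetic data stands in for the SGPGI records.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for patient attribute vectors, with injected missing values
# mimicking the real-world data issues described in the abstract.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))
X[rng.random(X.shape) < 0.05] = np.nan

# Baseline simple K-means pipeline (the comparison point, not the
# paper's Foggy K-means): impute, scale, then cluster.
pipeline = make_pipeline(
    SimpleImputer(strategy="median"),
    StandardScaler(),
    KMeans(n_clusters=2, n_init=10, random_state=0),
)
labels = pipeline.fit_predict(X)
```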
{"title":"Clustering of lung cancer data using Foggy K-means","authors":"A. Yadav, Divya Tomar, Sonali Agarwal","doi":"10.1109/ICRTIT.2013.6844173","DOIUrl":"https://doi.org/10.1109/ICRTIT.2013.6844173","url":null,"abstract":"In the medical field, huge data is available, which leads to the need of a powerful data analysis tool for extraction of useful information. Several studies have been carried out in data mining field to improve the capability of data analysis on huge datasets. Cancer is one of the most fatal diseases in the world. Lung Cancer with high rate of accurance is one of the serious problems and biggest killing disease in India. Prediction of occurance of the lung cancer is very difficult because it depends upon multiple attributes which could not be analyzedeasily. In this paper a real time lung cancer dataset is taken from SGPGI (Sanjay Gandhi Post Graduate Institute of Medical Sciences) Lucknow. A realtime dataset is always associated with its obvious challenges such as missing values, highly dimensional, noise, and outlier, which is not suitable for efficient classification. A clustering approach is an alternative solution to analyze the data in an unsupervised manner. In this current research work main focus is to develop a novel approach to create accurate clusters of desired real time datasets called Foggy K-means clustering. The result of the experiment indicates that foggy k-means clustering algorithm gives better result on real datasets as compared to simple k-means clustering algorithm and provides a better solution to the real world problem.","PeriodicalId":113531,"journal":{"name":"2013 International Conference on Recent Trends in Information Technology (ICRTIT)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122850803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detection of dropped non protruding objects in video surveillance using clustered data stream
Pub Date: 2013-07-25 | DOI: 10.1109/ICRTIT.2013.6844232
P. Jayasuganthi, V. Jeyaprabha, P. S. A. Kumar, Dr.V. Vaidehi
As more and more surveillance cameras are deployed in a facility or area, the demand for automatic detection of suspicious objects is increasing. Most of the recent literature concentrates on detecting protruding objects in video sequences. This paper proposes a novel approach to detect both protruding and non-protruding objects in sequences of walking pedestrians, based on the texture of the foreground objects. First, the static background is modelled with a mixture-of-Gaussians algorithm and the foreground objects are segmented. Objects are then detected frame by frame, and statistical parameters such as the mean and standard deviation are computed for every blob to form data streams. These parameters are clustered online using a k-means methodology over the data streams in order to find the outliers (dropped objects), where k is based on the number of objects present in the video. The method is evaluated on a standard dataset from the Video Surveillance Online Repository [15] as well as on our own dataset. Experimental results show that the system performs reasonably well and can accurately detect dropped objects in video data streams.
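A sketch of the pipeline steps the abstract lists (mixture-of-Gaussians background subtraction, per-blob mean/standard-deviation statistics, and online k-means over the resulting stream) using OpenCV and scikit-learn. The input file name, blob-area threshold, batch size, and k = 3 are assumptions, and the outlier rule at the end is only indicated in a comment.

```python
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

cap = cv2.VideoCapture("pedestrians.avi")     # assumed input file
mog = cv2.createBackgroundSubtractorMOG2()    # mixture-of-Gaussians model
clusterer = MiniBatchKMeans(n_clusters=3, n_init=3, random_state=0)
stats = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg = mog.apply(gray)                      # foreground segmentation
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 100:          # ignore tiny blobs
            continue
        mask = np.zeros_like(gray)
        cv2.drawContours(mask, [c], -1, 255, -1)
        pixels = gray[mask == 255]
        stats.append([pixels.mean(), pixels.std()])  # per-blob texture stats

    if len(stats) >= 30:                      # cluster the stream in batches
        clusterer.partial_fit(np.array(stats, dtype=np.float32))
        stats.clear()

# Blobs whose (mean, std) lie far from every cluster centre are the
# candidate dropped objects.
```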
{"title":"Detection of dropped non protruding objects in video surveillance using clustered data stream","authors":"P. Jayasuganthi, V. Jeyaprabha, P. S. A. Kumar, Dr.V. Vaidehi","doi":"10.1109/ICRTIT.2013.6844232","DOIUrl":"https://doi.org/10.1109/ICRTIT.2013.6844232","url":null,"abstract":"As more and more surveillance cameras are deployed in a facility or area the demand for automatic detection of suspicious objects is increasing. Most of the work in recent literature concentrated on protruding object detection in video sequences. This paper proposes a novel approach to detect protruding as well as non protruding objects in sequences of walking pedestrians based on texture of the foreground objects. Initially static background is modeled with the help of mixture of Gaussian algorithm and the foreground objects are segmented. Later object is detected frame by frame which is followed by the calculation of statistical parameters such as mean and standard deviation, in every blob, to form data streams. These parameters are clustered online using k-means methodology over data streams, in order to find the outliers (dropped objects). Here k is based on the number of objects present in the video. Finally we have implemented on a standard data set from the website Video Surveillance Online Repository [15] and also our own dataset. The experimental results show that our system performs reasonable well and can accurately detect dropped objects in video data streams.","PeriodicalId":113531,"journal":{"name":"2013 International Conference on Recent Trends in Information Technology (ICRTIT)","volume":"517 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123102218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cake cutting of CPU resources among multiple HPC agents on a cloud
Pub Date: 2013-07-25 | DOI: 10.1109/ICRTIT.2013.6844172
Kausal Malladi, Debargha Ganguly
“You cut, I choose” is a classical algorithm for the fair sharing of resources between two agents that guarantees envy-freeness. In a multi-agent scenario, several algorithms have been proposed for sharing resources fairly on a cloud. However, no algorithm has so far been proposed for High Performance Computing (HPC) agents, which are computationally intensive and require resources not just to be fair-shared but to be used to the utmost. This paper proposes an algorithm that considers a specific number of HPC agents that can run on a host machine and fair-shares the resources among them. The proposed algorithm assumes that the agents demanding resources take a game-theoretic approach, and it allocates each agent a decent proportion of its demand. The algorithm works in a real-world scenario in which agents keep being added dynamically to a host machine, and it assumes that agents do not depart after they are allocated.
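A minimal sketch of proportional fair-sharing with dynamically arriving agents, in the spirit of (but not reproducing) the paper's game-theoretic allocation rule; the demands and host capacity are example numbers.

```python
def allocate(capacity_cpus, demands):
    """Proportional fair-share sketch: each agent receives a share of
    the host's CPU capacity in proportion to its declared demand."""
    total = sum(demands.values())
    if total <= capacity_cpus:
        return dict(demands)                  # everyone fits: full demand
    scale = capacity_cpus / total
    return {agent: d * scale for agent, d in demands.items()}

# Agents arrive dynamically and never depart; the share is recomputed
# on each arrival, so earlier agents' allocations shrink as needed.
demands = {}
for agent, d in [("hpc-1", 8), ("hpc-2", 16), ("hpc-3", 8)]:
    demands[agent] = d
    print(allocate(16, demands))
```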
{"title":"Cake cutting of CPU resources among multiple HPC agents on a cloud","authors":"Kausal Malladi, Debargha Ganguly","doi":"10.1109/ICRTIT.2013.6844172","DOIUrl":"https://doi.org/10.1109/ICRTIT.2013.6844172","url":null,"abstract":"“You cut, I choose” is a classical algorithm for fair sharing of resources among two agents which guarantees “envy-freeness”. In a multi-agent scenario, several algorithms were proposed for sharing resources fairly on a Cloud. However, no algorithm has been proposed till now for High Performance Computing (HPC) agents which are computational intensive, where not just the resources are to be fair-shared but used to the utmost. This paper proposes an algorithm that considers a specific number of HPC agents that can be run on a host machine and tries to do a fair-share of resources. The proposed algorithm assumes the agents demanding resources to be taking a game-theoretic approach and gives a decent proportion of the demand as the allocation value. This algorithm works for a real-world scenario in which, the agents keep getting added dynamically to a host machine and assumes that the agents will not depart after they are allocated.","PeriodicalId":113531,"journal":{"name":"2013 International Conference on Recent Trends in Information Technology (ICRTIT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115376248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}