Pub Date: 2012-10-01 · DOI: 10.1109/WICT.2012.6409098
M. Babita Jain, Manoj Kumar Nigam, Prem Chand Tiwari
In this paper, short-term load forecasting (STLF) is performed by considering the sensitivity of the network load to temperature, humidity and day-type parameters (THD) as well as the previous load, and by showing that forecasting with these parameters is best done by the Regression Line Method (RLM) and the Curve Fitting Method (CFM). Analysis of the load data shows that the load pattern depends not only on temperature but also on humidity and day type. A new norm has been developed using the regression-line concept, with special constants that capture the effect of the history data and the THD parameters on the load forecast; it is used for STLF on the test portion of the data set considered. A unique norm with constants a, b, c and d derived from the history data is likewise proposed for STLF using the curve-fitting technique. The algorithms implementing these forecasting techniques are programmed in MATLAB. For the regression line method, the inputs are the previous year's daily average power, average temperature, average humidity and day type; for the curve fitting method, the inputs are the forecast previous month's data and the corresponding month of the previous year. The results are compared with the Euclidean Norm Method (ELM), a commonly used STLF technique. The simulation results show the robustness and suitability of the proposed CFM norm for STLF: the forecasting errors are below 3% for almost all day types and seasons. The results also indicate that the proposed curve fitting method outperforms both the regression technique and the standard Euclidean distance norm in forecasting accuracy, and hence offers utilities a better technique for short-term load forecasting.
Title: Curve fitting and regression line method based seasonal short term load forecasting (2012 World Congress on Information and Communication Technologies)
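The regression line idea above can be sketched as an ordinary least-squares fit of the daily average load against the THD parameters. The history values, the weekday/holiday encoding and the helper name `forecast` are illustrative assumptions; the paper's special constants are not given in the abstract.

```python
import numpy as np

# Hypothetical one-week history: daily average temperature (deg C),
# humidity (%), day type (0 = weekday, 1 = holiday) and load (MW).
X = np.array([[30.0, 60.0, 0],
              [32.0, 55.0, 0],
              [28.0, 70.0, 1],
              [31.0, 65.0, 1],
              [29.0, 62.0, 0]])
y = np.array([410.0, 430.0, 380.0, 395.0, 405.0])

# Ordinary least squares with an intercept: load ~= a*T + b*H + c*D + d.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def forecast(temp, humidity, day_type):
    """Predict the daily average load from the fitted regression line."""
    return float(coef @ np.array([temp, humidity, day_type, 1.0]))
```

With more history, the same fit extends naturally to seasonal subsets (one set of constants per season or day type), which is the spirit of the seasonal STLF studied here.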
Pub Date: 2012-10-01 · DOI: 10.1109/WICT.2012.6409214
D. Ambika, V. Radha
In this paper, the compression process is analyzed by comparing the compressed signal against the original signal. To do this, two widely used speech analysis and compression techniques, Linear Predictive Coding (LPC) and the Discrete Wavelet Transform (DWT), were implemented in MATLAB. Nine samples of spoken words, collected from different speakers, are used in the implementation. The results obtained with LPC are compared with those of the DWT, and evaluated in terms of compression ratio (CR), peak signal-to-noise ratio (PSNR) and normalized root-mean-square error (NRMSE). The results show that DWT performs better than LPC on these samples.
Title: A comparative study between Discrete Wavelet Transform and Linear Predictive Coding
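The three evaluation metrics named above have standard definitions, sketched below in Python. The function names are ours, and the PSNR peak is taken as the maximum absolute amplitude of the original signal, one common convention for speech.

```python
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    """CR: size of the original over size of the compressed representation."""
    return original_bits / compressed_bits

def psnr(x, x_hat):
    """Peak signal-to-noise ratio in dB, peak = max |amplitude| of x."""
    mse = np.mean((x - x_hat) ** 2)
    return 10.0 * np.log10(np.max(np.abs(x)) ** 2 / mse)

def nrmse(x, x_hat):
    """Root-mean-square error normalized by the RMS of the original signal."""
    return np.sqrt(np.mean((x - x_hat) ** 2)) / np.sqrt(np.mean(x ** 2))
```

Higher CR and PSNR and lower NRMSE indicate better compression, which is the basis of the DWT-versus-LPC comparison reported here.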
Pub Date: 2012-10-01 · DOI: 10.1109/WICT.2012.6409226
Yiming Guo, Lei Yang, Xiaoyu Wu, Xiaodan Pan
As a basic and efficient segmentation framework, GraphCut plays an important role in video segmentation. This paper proposes an adaptive video segmentation approach based on a shape prior of the foreground. Shape information, measured with Euclidean distance, is added to the GraphCut framework to compensate for the instability caused by relying on color information alone, and the shape model adapts to the size of the foreground. Experiments show that segmentation results with this method are significantly better than those using color information only.
Title: An adaptive video segmentation approach based on shape prior
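A minimal sketch of how such a shape term can enter the data (unary) cost of a graph cut, assuming a simple linear blend. The weight `lam` and the square-root size normalization are our illustrative choices, not the paper's exact model.

```python
def unary_cost(color_cost, dist_to_prior, fg_size, lam=0.5):
    """Blend a color-based data term with a shape penalty.

    dist_to_prior: Euclidean distance of the pixel from the prior
    foreground shape. Dividing by sqrt(fg_size) adapts the shape
    model to the current foreground size, so larger objects tolerate
    larger deviations from the prior.
    """
    scale = fg_size ** 0.5
    return (1 - lam) * color_cost + lam * dist_to_prior / scale
```

Pixels far from the prior shape get a higher foreground cost, which stabilizes the cut when the color model alone is ambiguous.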
Pub Date: 2012-10-01 · DOI: 10.1109/WICT.2012.6409076
V. Deshpande, Pratibha Chavan, V. Wadhai, J. Helonde
In Wireless Sensor Networks (WSNs), one or more sinks or base stations and many sensor nodes are distributed over a wide area. Sensor nodes have limited power. When a particular event occurs, these nodes can transmit a large volume of data towards the sink, which can overflow node buffers, causing packet drops and reducing network throughput. Congestion in WSNs also wastes energy through a large number of retransmissions and packet drops, shortening the lifetime of the sensor nodes. Congestion therefore needs to be controlled, both to reduce wasted energy and to extend node lifetime. The proposed congestion control mechanism improves network throughput, packet delivery ratio and packet loss. Many network aspects, such as reporting rate, node density and packet size, can affect congestion; here, congestion is controlled using a Differed Reporting Rate (DRR) algorithm.
Title: Congestion control in Wireless Sensor Networks by using Differed Reporting Rate
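The abstract does not detail the DRR algorithm, but a deferred-reporting-rate controller driven by buffer occupancy can be sketched as follows. The thresholds and the AIMD-style update rule are our assumptions.

```python
def deferred_rate(current_rate, buffer_occupancy, capacity,
                  high=0.8, low=0.3, factor=0.5):
    """Adjust a node's reporting rate (packets/s) from buffer occupancy.

    Above the high-water mark the rate is cut multiplicatively to
    relieve congestion; below the low-water mark it probes upward
    additively; otherwise it is left unchanged.
    """
    ratio = buffer_occupancy / capacity
    if ratio > high:            # congested: defer (cut) the reporting rate
        return current_rate * factor
    if ratio < low:             # underutilized: probe with a small increase
        return current_rate + 1.0
    return current_rate
```

Each node runs this locally, so buffer overflow (and hence packet drops and retransmissions) is avoided without any global coordination.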
Pub Date: 2012-10-01 · DOI: 10.1109/WICT.2012.6409251
Xiangyu Meng, Rong Cong, Kai Li
This paper discusses a new method for the discretization of continuous attributes based on attribute importance, which overcomes a limitation of traditional rough sets. Grouping candidate cut points according to consistency degree is an effective way to select them and also reduces their number. The consistency of the decision-making system is thus maintained during attribute discretization while the number of cut points is reduced and efficiency is improved. Variable-precision rough information entropy is adopted as the measuring criterion, giving good tolerance to noise. Experiments show that the algorithm yields satisfactory reduction results.
Title: Research on attributes discretization in target fusion syetem
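One way to see how grouping by class consistency shrinks the candidate cut-point set: keep a midpoint only where the class label changes between adjacent sorted examples. This boundary-point sketch is illustrative; the paper's grouping by consistency degree is related but not identical.

```python
def candidate_cuts(values, labels):
    """Candidate cut points for discretizing one continuous attribute.

    Sort examples by attribute value and keep a midpoint only where the
    class label changes between neighbours; cuts inside a run of one
    class cannot improve consistency, so dropping them reduces the
    candidate set without losing discriminating power.
    """
    pairs = sorted(zip(values, labels))
    cuts = []
    for (v1, c1), (v2, c2) in zip(pairs, pairs[1:]):
        if c1 != c2 and v1 != v2:
            cuts.append((v1 + v2) / 2)
    return cuts
```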
Pub Date: 2012-10-01 · DOI: 10.1109/WICT.2012.6409136
R. Dash, B. Misra, P. Dash, G. Panda
This paper proposes a greedy polynomial neural network (GPNN) for classification, one of the most studied tasks in data mining. Classical algorithms such as the Polynomial Neural Network (PNN) take a long time to compute because the network grows over the training period, i.e. the partial descriptions (PDs) in each layer grow over successive generations. Unlike PNN, the proposed approach restricts the growth of partial descriptions to a single layer. A greedy technique is then used to select the subset of PDs that best maps the input-output relation in general. The performance of this model is compared with results obtained from PNN. Simulation results show that GPNN is promising for data mining and also faster than the PNN model.
Title: Greedy polynomial neural network for classification task in data mining
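A single-layer set of quadratic partial descriptions plus greedy subset selection can be sketched as below. The exact PD polynomial and the selection criterion (least-squares training error) are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def partial_descriptions(X):
    """One quadratic polynomial basis (PD) per input pair (xi, xj):
    [1, xi, xj, xi*xj, xi^2, xj^2] -- a single layer, never grown."""
    pds = []
    for i, j in combinations(range(X.shape[1]), 2):
        xi, xj = X[:, i], X[:, j]
        pds.append(np.column_stack(
            [np.ones_like(xi), xi, xj, xi * xj, xi ** 2, xj ** 2]))
    return pds

def greedy_select(pds, y, k):
    """Greedily keep the k PDs whose least-squares fit has lowest error."""
    errs = []
    for idx, A in enumerate(pds):
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        errs.append((np.mean((A @ coef - y) ** 2), idx))
    return [idx for _, idx in sorted(errs)[:k]]
```

Because the layer never grows, training cost stays fixed at one least-squares fit per PD, which is the source of the speed-up over the generational PNN.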
Pub Date: 2012-10-01 · DOI: 10.1109/WICT.2012.6409109
R. Kameswara Rao, B. Satya, Vara Prasad, K. Naidu, S. Changchien, T. Liang, Jiann-Fuh Chen, Lung-Sheng Yang, Shih-Ming Chen, Jiann-Fuh Chen Power
A high-efficiency dc-dc converter with high voltage gain and reduced switch stress is proposed. Generally, a coupled inductor is useful for raising the step-up ratio of the conventional boost converter; however, the leakage inductor may cause a surge voltage on the switch, requiring high-voltage-rated devices. This paper proposes a new high step-up dc-dc converter designed especially for regulating the dc interface between various micro-sources and the dc-ac inverter feeding the electricity grid. The configuration of the proposed converter is a quadratic boost converter with a coupled inductor in the second boost stage. The converter achieves high step-up voltage gain at an appropriate duty ratio with low voltage stress on the power switch. Additionally, the energy stored in the leakage inductor of the coupled inductor can be recycled to the output capacitor. The operating principles and steady-state analysis of continuous-conduction mode and boundary-conduction mode are discussed in detail. The simulation circuit is developed in MATLAB/SIMULINK and the relevant characteristics are analysed.
Title: Notice of Violation of IEEE Publication Principles: A cascaded high step-up dc-dc converter for micro-grid
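For orientation, the ideal continuous-conduction-mode gain of the quadratic boost backbone is 1/(1-D)^2; the coupled inductor then raises this further by its turns ratio in a topology-specific way not reproduced in the abstract. A minimal sketch of the backbone gain:

```python
def boost_gain(duty):
    """Ideal CCM voltage gain of a single boost stage: M = 1/(1 - D)."""
    assert 0.0 <= duty < 1.0
    return 1.0 / (1.0 - duty)

def quadratic_boost_gain(duty):
    """Two cascaded boost stages driven by one switch: M = 1/(1 - D)^2."""
    return boost_gain(duty) ** 2
```

At a moderate duty ratio of 0.75 the quadratic structure already gives a 16x step-up, which is why it can reach micro-grid bus voltages without the extreme duty ratios a single boost stage would need.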
Pub Date: 2012-10-01 · DOI: 10.1109/WICT.2012.6409057
R. Bedi, Nitinkumar Rajendra Gove, V. Wadhai
With the number of social network users growing exponentially, protecting user privacy in these networks has gained prime importance. When joining a social network, the user is asked to fill in a lot of unnecessary information, such as educational background, birth date and interests. This information may be leaked or misused if not protected with proper security measures. The data stored in a social network can be attacked in different ways depending on the purpose of the attack. In this paper we first identify the basic types of privacy breaches in social networks. Second, we study the concept of Hippocratic principles. We propose a simple classification of the information requested from a user joining a social network, and a privacy-preserving model based on Hippocratic principles, specifically Purpose, Limited Disclosure, Consent and Compliance. The proposed model works on privacy metadata: the query analyzer is extended to check the defined policy before returning results. This model can be used when mining private data, helping to raise the level of trust among internet users.
Title: Application of Hippocratic principles for privacy preservation in social network
Pub Date: 2012-10-01 · DOI: 10.1109/WICT.2012.6409102
J. Subhashim, V. Bhaskar
In this paper, we derive closed-form expressions for the capacity per unit bandwidth (spectrum efficiency) of correlated Rayleigh fading channels under maximal ratio combining diversity with high correlation between the pilot and the signal. The spectrum efficiency expressions are derived for M diversity branches under four adaptation policies: (i) Optimal Power and Rate Adaptation (OPRA), (ii) Optimal Rate Adaptation (ORA), (iii) Channel Inversion with Fixed Rate (CIFR), and (iv) Truncated channel Inversion with Fixed Rate (TIFR). If the M branch signals are highly correlated and space diversity is exercised using a SIMO system, the achieved spectrum efficiency is higher than when the signals are uncorrelated with no diversity; this forms the focal point of the paper. Analytical results show that the OPRA policy provides the highest capacity of all the adaptation policies. The spectrum efficiency for all four policies and the outage probability for the highly correlated case are derived, plotted and analyzed in detail.
Title: Capacity analysis of highly Correlated Rayleigh Fading Channels for Maximal Ratio Combining Diversity
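To make the policies concrete, the sketch below Monte-Carlo-estimates two of the four spectrum efficiencies for a single-branch Rayleigh channel (exponentially distributed SNR), ignoring the paper's branch correlation and MRC structure. The mean SNR and truncation cutoff are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
snr_mean = 10.0                                # average received SNR (linear)
gamma = rng.exponential(snr_mean, 200_000)     # Rayleigh fading -> exponential SNR

# ORA: adapt the rate only; C/B = E[log2(1 + gamma)].
c_ora = np.mean(np.log2(1.0 + gamma))

# TIFR: invert the channel only above a cutoff gamma0 and transmit at a
# fixed rate; C/B = log2(1 + 1 / E[1/gamma; gamma > gamma0]) * P(gamma > gamma0).
gamma0 = 1.0
mask = gamma > gamma0
inv_mean = np.sum(1.0 / gamma[mask]) / gamma.size
c_tifr = np.log2(1.0 + 1.0 / inv_mean) * np.mean(mask)
```

Full CIFR is omitted because E[1/gamma] diverges for Rayleigh fading, making its capacity zero, which is exactly why the truncated (TIFR) variant exists; ORA upper-bounds TIFR here, and OPRA (not sketched, since it needs a cutoff solved from a power constraint) upper-bounds both.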
Pub Date: 2012-10-01 · DOI: 10.1109/WICT.2012.6409081
Somdip Dey
In this paper, the author proposes a new combined symmetric-key cryptographic method based on the following steps: 1) in the first step, the position number of each byte in the message stream (plain text file) is added to the ASCII value of that byte; 2) in the second step, a Single Bit Manipulation technique is applied to each byte; 3) in the third step, an Advanced Bit Randomization technique is applied to blocks of data after converting each byte to its binary equivalent; 4) in the fourth and final step, a Bit Reversal technique is applied to form the encrypted message (output file). The second and third steps are random in nature and depend on the password (symmetric key) provided to the encryption method. As the steps show, the method is an amalgamation of byte-level and bit-level cipher techniques. It has been tested on many plain text files and other file formats with very satisfactory results: no pattern was found in the output files, and spectral analysis of the character frequencies confirms this.
Title: SD-C1BBR: SD-count-1-byte-bit randomization: A new advanced cryptographic randomization technique
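The four steps can be sketched as an invertible toy cipher. The specific bit chosen in step 2, the bit permutation in step 3, and the use of Python's seeded `random` as the key schedule are all our illustrative assumptions, not the paper's exact SD-C1BBR construction.

```python
import random

def _permute_bits(byte, perm):
    # Move bit i of `byte` to position perm[i].
    return sum(((byte >> i) & 1) << perm[i] for i in range(8))

_REVERSE = list(range(7, -1, -1))  # bit-reversal permutation (self-inverse)

def encrypt(data: bytes, password: str) -> bytes:
    rng = random.Random(password)                  # key-seeded choices
    flip = rng.randrange(8)                        # step 2: which bit to flip
    perm = list(range(8)); rng.shuffle(perm)       # step 3: bit randomization
    out = bytearray((b + i) % 256 for i, b in enumerate(data))  # step 1: add position
    out = bytearray(b ^ (1 << flip) for b in out)               # step 2
    out = bytearray(_permute_bits(b, perm) for b in out)        # step 3
    return bytes(_permute_bits(b, _REVERSE) for b in out)       # step 4: bit reversal

def decrypt(data: bytes, password: str) -> bytes:
    rng = random.Random(password)                  # replay the key schedule
    flip = rng.randrange(8)
    perm = list(range(8)); rng.shuffle(perm)
    inv = [perm.index(i) for i in range(8)]        # inverse of the permutation
    out = bytearray(_permute_bits(b, _REVERSE) for b in data)   # undo step 4
    out = bytearray(_permute_bits(b, inv) for b in out)         # undo step 3
    out = bytearray(b ^ (1 << flip) for b in out)               # undo step 2
    return bytes((b - i) % 256 for i, b in enumerate(out))      # undo step 1
```

Because every step is a bijection on bytes (modular addition, bit flip, bit permutations), decryption simply replays the key schedule and applies the inverses in reverse order.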