
Latest publications from the 2012 World Congress on Information and Communication Technologies

Curve fitting and regression line method based seasonal short term load forecasting
Pub Date : 2012-10-01 DOI: 10.1109/WICT.2012.6409098
M. Babita Jain, Manoj Kumar Nigam, Prem Chand Tiwari
Short-term load forecasting (STLF) in this paper is performed by considering the sensitivity of the network load to temperature, humidity and day-type (THD) parameters as well as the previous load, and by showing that forecasting with these parameters is best done by the Regression Line Method (RLM) and the Curve Fitting Method (CFM). Analysis of the load data shows that the load pattern depends not only on temperature but also on humidity and day type. A new norm has been developed using the regression-line concept, with special constants that capture the effect of the history data and THD parameters on the load forecast; it is used for STLF on the test subset of the data set considered. A unique norm with constants a, b, c and d based on the history data has been proposed for STLF using the curve-fitting technique. The algorithms implementing these forecasting techniques have been programmed in MATLAB. For the regression line method, the inputs are the previous year's daily average power, average temperature, average humidity and day type; for the curve fitting method, the forecast data of the previous month and the data of the similar month of the previous year are used. The results are also compared with the Euclidean Norm Method (ELM), the method generally used for STLF. The simulation results show the robustness and suitability of the proposed CFM norm for STLF, as the forecasting accuracy is very good, with errors below 3% for almost all day types and all seasons. The results also indicate that the proposed curve fitting method outperforms the regression technique and the standard Euclidean distance norm in forecasting accuracy, and hence provides utilities with a better technique for short-term load forecasting.
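As a rough illustration of the regression-line idea (the paper's own implementation is in MATLAB and its exact norm constants are not reproduced here), the following Python sketch fits daily load against temperature, humidity and day-type inputs and forecasts the next day; all data values and feature names are illustrative assumptions.

```python
# Hedged sketch: least-squares regression of daily load on temperature,
# humidity and day type, in the spirit of the RLM described above.
# The data, column layout and model form are illustrative assumptions.
import numpy as np

# History: [avg_temp (C), avg_humidity, day_type (0=weekday, 1=weekend)]
X_hist = np.array([[30.1, 0.62, 0], [31.4, 0.70, 0], [29.8, 0.55, 1], [32.0, 0.66, 0]])
y_hist = np.array([412.0, 433.5, 378.2, 441.0])   # average daily load (MW)

# Add an intercept column and solve the least-squares problem.
A = np.hstack([np.ones((X_hist.shape[0], 1)), X_hist])
coef, *_ = np.linalg.lstsq(A, y_hist, rcond=None)

# Forecast the next day from its expected THD values.
x_next = np.array([1.0, 30.7, 0.64, 0])
load_forecast = x_next @ coef
print(f"forecast load: {load_forecast:.1f} MW")
```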
Citations: 17
A comparative study between Discrete Wavelet Transform and Linear Predictive Coding
Pub Date : 2012-10-01 DOI: 10.1109/WICT.2012.6409214
D. Ambika, V. Radha
In this paper the compression process is analysed by comparing the compressed signal against the original signal. To do this, two powerful speech analysis and compression techniques, Linear Predictive Coding (LPC) and the Discrete Wavelet Transform (DWT), were implemented in MATLAB. Nine samples of spoken words collected from different speakers are used for the implementation. The results obtained from LPC were compared with those of the other compression technique, the Discrete Wavelet Transform. Finally, the results were evaluated in terms of compression ratio (CR), peak signal-to-noise ratio (PSNR) and normalized root-mean-square error (NRMSE). The results show that DWT performed better on these samples than the LPC method.
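A minimal Python sketch of a DWT-based compression step together with the CR, PSNR and NRMSE measures used above; the wavelet, threshold rule and synthetic test signal are assumptions rather than the authors' setup (which used MATLAB and recorded speech).

```python
# Hedged sketch: wavelet-threshold compression of a 1-D signal plus the
# CR, PSNR and NRMSE figures referred to above.
import numpy as np
import pywt

fs = 8000
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)

# Decompose, zero out small detail coefficients, reconstruct.
coeffs = pywt.wavedec(signal, 'db4', level=5)
thr = 0.1 * max(np.max(np.abs(c)) for c in coeffs)
coeffs_c = [coeffs[0]] + [pywt.threshold(c, thr, mode='hard') for c in coeffs[1:]]
recon = pywt.waverec(coeffs_c, 'db4')[: len(signal)]

# Compression ratio: original samples vs. retained (non-zero) coefficients.
retained = sum(int(np.count_nonzero(c)) for c in coeffs_c)
cr = len(signal) / retained

err = signal - recon
psnr = 10 * np.log10(np.max(signal ** 2) / np.mean(err ** 2))
nrmse = np.sqrt(np.mean(err ** 2)) / (np.max(signal) - np.min(signal))
print(f"CR={cr:.2f}  PSNR={psnr:.1f} dB  NRMSE={nrmse:.4f}")
```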
Citations: 12
An adaptive video segmentation approach based on shape prior
Pub Date : 2012-10-01 DOI: 10.1109/WICT.2012.6409226
Yiming Guo, Lei Yang, Xiaoyu Wu, Xiaodan Pan
As a basic and efficient segmentation framework, GraphCut plays an important part in video segmentation. This paper proposes an adaptive video segmentation approach based on a shape prior of the foreground. Shape information based on a Euclidean distance measure is added to the GraphCut framework to compensate for the instability caused by using colour information alone, and the shape model adapts to the size of the foreground. Experiments show that segmentation results with our method are significantly better than those obtained using colour information only.
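The following sketch illustrates, under assumptions, how a Euclidean-distance shape prior can be blended into the per-pixel foreground cost that a graph-cut solver would then minimise; the template mask, weighting and cost model are illustrative, not the paper's exact formulation.

```python
# Hedged sketch: Euclidean-distance shape prior added to a colour-based
# foreground cost, scaled by the foreground size so the prior adapts.
import numpy as np
from scipy.ndimage import distance_transform_edt

h, w = 120, 160
color_fg_cost = np.random.rand(h, w)          # stand-in for -log P(fg | colour)

# Previous frame's foreground mask acts as the shape template.
prev_mask = np.zeros((h, w), bool)
prev_mask[30:90, 50:110] = True
dist = distance_transform_edt(~prev_mask)      # 0 inside the mask, grows outside
scale = np.sqrt(prev_mask.sum())               # adapt the prior to the object size
shape_cost = dist / scale

lam = 0.5                                      # weight of the shape term (assumed)
unary_fg = color_fg_cost + lam * shape_cost    # would feed the graph-cut data term
print(unary_fg.shape, float(unary_fg.min()), float(unary_fg.max()))
```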
Citations: 1
Congestion control in Wireless Sensor Networks by using Differed Reporting Rate
Pub Date : 2012-10-01 DOI: 10.1109/WICT.2012.6409076
V. Deshpande, Pratibha Chavan, V. Wadhai, J. Helonde
In Wireless Sensor Networks (WSNs), one or more sinks or base stations and many sensor nodes are distributed over a wide area. Sensor nodes have limited power. When a particular event occurs, these sensor nodes can transmit a large volume of data towards the sink, which can cause buffer overflow at the nodes, leading to packet drops and decreased network throughput. In WSNs, congestion may also waste energy through a large number of retransmissions and packet drops, shortening the lifetime of sensor nodes. Congestion in WSNs therefore needs to be controlled to reduce wasted energy and increase the lifetime of sensor nodes. The proposed congestion control mechanism improves network throughput and packet delivery ratio and reduces packet loss. Many network parameters, such as reporting rate, node density and packet size, can affect congestion. Congestion can be controlled using the Differed Reporting Rate (DRR) algorithm.
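A hedged sketch of the general idea of deferring the reporting rate under congestion; the thresholds, step sizes and control rule below are assumptions and not the paper's exact DRR algorithm.

```python
# Hedged sketch (not the paper's exact DRR algorithm): lower a source's
# reporting rate when buffer occupancy signals congestion, and restore it
# as the queue drains. Thresholds and step sizes are illustrative assumptions.
def adjust_reporting_rate(rate_hz, buffer_occupancy, buffer_size,
                          high=0.8, low=0.3, min_rate=0.5, max_rate=10.0):
    """Return the differed (adjusted) reporting rate for one control interval."""
    load = buffer_occupancy / buffer_size
    if load > high:                 # congestion building up: back off multiplicatively
        rate_hz = max(min_rate, rate_hz * 0.5)
    elif load < low:                # queue drained: probe upwards additively
        rate_hz = min(max_rate, rate_hz + 0.5)
    return rate_hz

# Example: a node reporting at 8 Hz with a nearly full 64-packet buffer.
print(adjust_reporting_rate(8.0, 60, 64))   # -> 4.0
```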
Citations: 16
Research on attributes discretization in target fusion system
Pub Date : 2012-10-01 DOI: 10.1109/WICT.2012.6409251
Xiangyu Meng, Rong Cong, Kai Li
This paper discusses a new method for the discretization of continuous attributes based on attribute importance, which overcomes the limitations of traditional rough sets. Grouping according to consistency degree is an effective way to select candidate cut points, and it also helps reduce the number of cut points. The consistency of the decision-making system is thus maintained during attribute discretization, which permits a reduction in the number of cut points and an improvement in efficiency. Variable-precision rough information entropy is adopted as the measuring criterion, giving good tolerance to noise. Experiments show that the algorithm yields satisfactory reduction results.
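As one concrete illustration of candidate cut-point selection (a common rough-set heuristic, not necessarily the paper's exact grouping or variable-precision entropy criterion), cuts can be placed only at midpoints where the decision class changes, which preserves the consistency of the decision system while discarding cuts that cannot matter.

```python
# Hedged sketch: candidate cut points for one continuous attribute, placed at
# class boundaries only. The data and the selection rule are illustrative.
def candidate_cuts(values, labels):
    pairs = sorted(zip(values, labels))
    cuts = []
    for (v1, c1), (v2, c2) in zip(pairs, pairs[1:]):
        if c1 != c2 and v1 != v2:          # decision class flips between distinct values
            cuts.append((v1 + v2) / 2.0)
    return cuts

values = [1.2, 3.4, 2.1, 5.0, 4.2, 0.8]
labels = ['a', 'b', 'a', 'b', 'b', 'a']
print(candidate_cuts(values, labels))       # only the midpoint where the class changes
```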
Citations: 3
Greedy polynomial neural network for classification task in data mining
Pub Date : 2012-10-01 DOI: 10.1109/WICT.2012.6409136
R. Dash, B. Misra, P. Dash, G. Panda
In this paper, a greedy polynomial neural network (GPNN) for the classification task is proposed. Classification is one of the most studied tasks in data mining. In solving the classification task, a classical algorithm such as the Polynomial Neural Network (PNN) takes a large amount of computation time because the network grows over the training period, i.e. the partial descriptions (PDs) in each layer grow in successive generations. Unlike PNN, the proposed work restricts the growth of partial descriptions to a single layer. A greedy technique is then used to select the subset of PDs that best map the input-output relation in general. The performance of this model is compared with the results obtained from PNN. Simulation results show that the performance of GPNN is encouraging for data mining applications, and that it is also better than the PNN model in terms of processing time.
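A hedged sketch of the single-layer, greedy idea: build one quadratic partial description per feature pair, then greedily keep those that most reduce the validation error of an averaged combiner. The polynomial form, toy data and stopping rule are assumptions, not the authors' exact GPNN.

```python
# Hedged sketch of a single layer of quadratic partial descriptions (PDs)
# followed by greedy forward selection on a validation split.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2 > 0).astype(float)   # toy 2-class target
X_tr, X_va, y_tr, y_va = X[:150], X[150:], y[:150], y[150:]

def design(Xa, i, j):
    xi, xj = Xa[:, i], Xa[:, j]
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi ** 2, xj ** 2])

def fit_pd(Xa, ya, i, j):
    """Least-squares fit of one quadratic PD on features i and j."""
    w, *_ = np.linalg.lstsq(design(Xa, i, j), ya, rcond=None)
    return w

pds = [(i, j, fit_pd(X_tr, y_tr, i, j)) for i, j in combinations(range(X.shape[1]), 2)]

# Greedy selection: keep adding the PD that lowers validation MSE of the
# averaged ensemble; stop when no remaining PD improves it.
chosen, best_err = [], np.inf
while True:
    best = None
    for k in range(len(pds)):
        if k in chosen:
            continue
        preds = np.mean([design(X_va, pds[c][0], pds[c][1]) @ pds[c][2]
                         for c in chosen + [k]], axis=0)
        err = np.mean((preds - y_va) ** 2)
        if err < best_err:
            best, best_err = k, err
    if best is None:
        break
    chosen.append(best)

print("selected PDs:", [(pds[k][0], pds[k][1]) for k in chosen], "val MSE:", round(best_err, 4))
```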
Citations: 0
Notice of Violation of IEEE Publication Principles: A cascaded high step-up dc-dc converter for micro-grid
Pub Date : 2012-10-01 DOI: 10.1109/WICT.2012.6409109
R. Kameswara Rao, B. Satya, Vara Prasad, K. Naidu, S. Changchien, T. Liang, Jiann-Fuh Chen, Lung-Sheng Yang, Shih-Ming Chen, Jiann-Fuh Chen Power
A high-efficiency dc-dc converter with high voltage gain and reduced switch stress is proposed. Generally speaking, a coupled inductor is useful for raising the step-up ratio of the conventional boost converter. However, the leakage inductance can cause a surge voltage across the switch, which would require devices with a high voltage rating. This paper proposes a new high step-up dc-dc converter designed especially for regulating the dc interface between various micro-sources and a dc-ac inverter connected to the electricity grid. The configuration of the proposed converter is a quadratic boost converter with the coupled inductor placed in the second boost stage. The converter achieves high step-up voltage gain with an appropriate duty ratio and low voltage stress on the power switch. Additionally, the energy stored in the leakage inductance of the coupled inductor can be recycled to the output capacitor. The operating principles and steady-state analysis of continuous-conduction mode and boundary-conduction mode are discussed in detail. The simulation circuit is developed using MATLAB/SIMULINK modelling and the relevant characteristics are analysed.
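For context, a textbook reference point (not this paper's full derivation, which also accounts for the coupled inductor's turns ratio and leakage energy recovery): the ideal continuous-conduction-mode gain of a quadratic boost stage is

```latex
% Ideal CCM voltage gain of a quadratic boost converter with lossless components,
% where D is the switch duty ratio; e.g. D = 0.6 already gives 1/(0.4)^2 = 6.25.
\[
  \frac{V_o}{V_{in}} = \frac{1}{(1-D)^2}
\]
```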
Citations: 6
Application of Hippocratic principles for privacy preservation in social network
Pub Date : 2012-10-01 DOI: 10.1109/WICT.2012.6409057
R. Bedi, Nitinkumar Rajendra Gove, V. Wadhai
With the number of social network users growing exponentially, protecting user privacy in these networks has gained prime importance. When joining a social network, the user is asked to fill in a lot of unnecessary information such as educational background, birth date and interests. This information may be leaked or accessed maliciously if it is not protected with proper security measures. The data stored in a social network may be attacked in different ways depending on the purpose of the attack. In this paper we identify the basic types of privacy breaches in social networks. Secondly, we study the concept of Hippocratic principles. We propose a simple classification of the information requested from the user on joining the social network. We also propose a privacy-preserving model based on Hippocratic principles, specifically Purpose, Limited Disclosure, Consent and Compliance. Our proposed model works on privacy metadata; the query analyzer is extended to check the defined policy before returning a result. This model can be used when mining private data, which will help enhance the level of privacy and trustworthiness among internet users.
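A minimal sketch of the purpose and limited-disclosure check described above: privacy metadata records the purposes consented to for each attribute, and the query analyzer filters requested attributes against the query's stated purpose. The schema, attribute names and policy values are assumptions, not the paper's concrete model.

```python
# Hedged sketch: purpose-based filtering of requested attributes against
# per-attribute consent stored as privacy metadata.
PRIVACY_METADATA = {
    "name":       {"friend_suggestion", "profile_display"},
    "birth_date": {"profile_display"},
    "email":      set(),                      # no disclosure consented
    "interests":  {"friend_suggestion"},
}

def analyze_query(requested_attrs, purpose):
    """Return the attributes that may be disclosed for this purpose, and those withheld."""
    allowed = [a for a in requested_attrs if purpose in PRIVACY_METADATA.get(a, set())]
    denied = [a for a in requested_attrs if a not in allowed]
    return allowed, denied

allowed, denied = analyze_query(["name", "email", "interests"], "friend_suggestion")
print("disclose:", allowed, "| withhold:", denied)
```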
Citations: 5
Capacity analysis of highly Correlated Rayleigh Fading Channels for Maximal Ratio Combining Diversity
Pub Date : 2012-10-01 DOI: 10.1109/WICT.2012.6409102
J. Subhashim, V. Bhaskar
In this paper, we derive closed-form expressions for the capacity per unit bandwidth (spectrum efficiency) of correlated Rayleigh fading channels under maximal ratio combining diversity with high correlation between the pilot and the signal. The spectrum efficiency expressions are derived for M diversity branches under four adaptation policies: (i) Optimal Power and Rate Adaptation (OPRA), (ii) Optimal Rate Adaptation (ORA), (iii) Channel Inversion with Fixed Rate (CIFR), and (iv) Truncated channel Inversion with Fixed Rate (TIFR). If the M branch signals are highly correlated and space diversity is exercised using a SIMO system, the spectrum efficiency achieved is higher than that achieved when the signals are uncorrelated and have no diversity; this forms the focal point of this paper. Analytical results show that the OPRA policy provides the highest capacity among the adaptation policies. The spectrum efficiency of all four policies and the outage probability for the highly correlated case are derived, plotted and analyzed in detail in this work.
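For reference, the generic spectrum-efficiency definitions of the four policies over a combiner-output SNR density p_gamma(gamma) take the standard forms below; the paper's contribution is evaluating these in closed form for the highly correlated Rayleigh MRC case.

```latex
% Standard per-unit-bandwidth capacities under the four adaptation policies;
% p_\gamma(\gamma) is the MRC-output SNR density, \gamma_0 the optimized cutoff SNR,
% and P_\mathrm{out} the outage probability below \gamma_0.
\begin{align*}
  \frac{C_{\mathrm{OPRA}}}{B} &= \int_{\gamma_0}^{\infty} \log_2\!\left(\frac{\gamma}{\gamma_0}\right) p_\gamma(\gamma)\, d\gamma \\
  \frac{C_{\mathrm{ORA}}}{B}  &= \int_{0}^{\infty} \log_2(1+\gamma)\, p_\gamma(\gamma)\, d\gamma \\
  \frac{C_{\mathrm{CIFR}}}{B} &= \log_2\!\left(1 + \left[\int_{0}^{\infty} \frac{p_\gamma(\gamma)}{\gamma}\, d\gamma\right]^{-1}\right) \\
  \frac{C_{\mathrm{TIFR}}}{B} &= \log_2\!\left(1 + \left[\int_{\gamma_0}^{\infty} \frac{p_\gamma(\gamma)}{\gamma}\, d\gamma\right]^{-1}\right) (1 - P_{\mathrm{out}})
\end{align*}
```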
Citations: 2
SD-C1BBR: SD-count-1-byte-bit randomization: A new advanced cryptographic randomization technique
Pub Date : 2012-10-01 DOI: 10.1109/WICT.2012.6409081
Somdip Dey
In this paper, the author proposes a new combined symmetric key cryptographic method based on the following steps: 1) in the first step, the position number of each byte in the stream of the message (plain text file) is added to the ASCII value of that byte; 2) in the second step, a single-bit manipulation technique is applied to each byte; 3) in the third step, an advanced bit randomization technique is applied to blocks of data after converting each byte to its equivalent binary format; 4) in the fourth and final step, a bit reversal technique is applied to form the encrypted message (output file). The second and third steps are random in nature and depend on the password (symmetric key) provided to the encryption method. It is evident from the steps above that the method is an amalgamation of byte- and bit-manipulation cipher techniques. The method has been tested on many plain text files and other file formats, and the results were very satisfactory: no pattern was found in the output file, and spectral analysis of the character frequencies confirms this.
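A hedged Python sketch of steps 1 and 4 above (position addition and bit reversal); the password-driven steps 2 and 3 are omitted because their exact rules are not specified here, and the 1-based position indexing is an assumption.

```python
# Hedged sketch of two of the four steps listed above; not the full SD-C1BBR cipher.
def position_add(data: bytes) -> bytes:
    """Step 1: add each byte's position (1-based here, an assumption) to its value, mod 256."""
    return bytes((b + i) % 256 for i, b in enumerate(data, start=1))

def bit_reverse(data: bytes) -> bytes:
    """Step 4: reverse the 8-bit pattern of every byte."""
    return bytes(int(f"{b:08b}"[::-1], 2) for b in data)

msg = b"hello"
stage1 = position_add(msg)      # steps 2-3 (password-driven randomization) would go here
cipher = bit_reverse(stage1)
print(cipher.hex())
```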
Citations: 7