
Latest publications: 2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)

Openstack-paradigm shift to open source cloud computing & its integration
Pub Date : 2016-12-01 DOI: 10.1109/IC3I.2016.7917944
Shubham Awasthi, A. Pathak, Lovekesh Kapoor
With the emergence of cloud computing, there has been a huge demand for data center technology and for operating systems capable of managing data centers. The growing demand for infrastructure services is driving organizations toward the cloud. The aim is to give industry the opportunity to build a massively scalable, completely open source hosting architecture, and to provide a solution for managing on-premises data centers (private clouds) and public cloud data centers simultaneously. The combination of private and public cloud workloads is referred to as the Hybrid Cloud, and OpenStack is the open source solution available in the market for building and managing it. Here we discuss the concept of OpenStack in detail, including its architecture and functionality, and describe how we set it up in our environment and tested different use cases.
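For readers who want a concrete starting point, the sketch below (not taken from the paper) uses the openstacksdk library to query a private and a public OpenStack deployment side by side, which is the kind of hybrid-cloud management the abstract describes. The cloud names private-dc and public-dc are hypothetical entries assumed to exist in a local clouds.yaml.

```python
# Minimal sketch: list servers in two OpenStack clouds with openstacksdk.
# "private-dc" and "public-dc" are assumed clouds.yaml entries, not real names.
import openstack

def list_servers(cloud_name):
    """Connect to one OpenStack cloud and return (name, status) pairs."""
    conn = openstack.connect(cloud=cloud_name)
    return [(srv.name, srv.status) for srv in conn.compute.servers()]

if __name__ == "__main__":
    for cloud in ("private-dc", "public-dc"):
        print(cloud)
        for name, status in list_servers(cloud):
            print(f"  {name}: {status}")
```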
Citations: 4
FMODC: Fuzzy guided multi-objective document clustering by GA
Pub Date : 2016-12-01 DOI: 10.1109/IC3I.2016.7918043
A. Rao, S. Ramakrishna
In the prevalent unsupervised learning methods of the text mining process, issues pertaining to multi-dimensionality are turning out to be a major factor, as clustering does not focus on the optimal evaluation of concept, context and semantic relevancy, which are also essential to the clustering process. In the majority of earlier models, factors such as term frequency were considered, and clustering focused only on the semantic factor, whereas context and conceptual factors are also of significant importance. Extending the earlier MODC and DC3SR models, this study proposes a GA-based multi-objective distance based optimal document clustering model. Drawing on lessons learnt from the review of earlier models, the scope for a fuzzy guided multi-objective optimal document clustering (FMODC) approach, which supports more effective computation and clustering using the genetic algorithm, is discussed in a case scenario. In the experiments of this study, using meta-text data gathered from the same publisher, the model was tested in a comparative analysis against two other models, BADC and AC-DCO, and the optimum clustering achieved with the FMODC model demonstrates the accuracy of the model and the system. An unsupervised learning approach is proposed that forms the initial clusters by estimating the similarity between any two documents from concept, context and semantic relevance scores and further optimizes them with a fuzzy genetic algorithm. This novel method represents concept as the correlation between arguments and activities in the given documents and context as the correlation between the meta-text of the documents, while semantic relevance is assessed by estimating the similarity between documents through the hyponyms of the arguments. The meta-text considered for context assessment contains the author list, the keyword list and the list of document versioning time schedules. Experiments were conducted to assess the significance of the proposed model.
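As a rough illustration of the scoring idea only (the authors' fuzzy and GA machinery is not reproduced here), the sketch below fuses hypothetical concept, context and semantic similarity matrices into one distance matrix and clusters on it with plain agglomerative clustering; the weights and toy data are assumptions.

```python
# Minimal sketch: weighted fusion of three similarity views, then clustering.
# Agglomerative clustering stands in for the paper's GA/fuzzy optimization.
import numpy as np
from sklearn.cluster import AgglomerativeClustering  # scikit-learn >= 1.2

def combined_distance(sim_concept, sim_context, sim_semantic,
                      weights=(0.4, 0.3, 0.3)):
    """Weighted fusion of three [0, 1] similarity matrices into a distance matrix."""
    w1, w2, w3 = weights  # illustrative weights, not from the paper
    sim = w1 * sim_concept + w2 * sim_context + w3 * sim_semantic
    return 1.0 - sim

rng = np.random.default_rng(0)

def toy_sim(n=4):
    """Random symmetric similarity matrix for n toy documents."""
    m = rng.uniform(0.2, 1.0, (n, n))
    m = (m + m.T) / 2
    np.fill_diagonal(m, 1.0)
    return m

dist = combined_distance(toy_sim(), toy_sim(), toy_sim())
labels = AgglomerativeClustering(n_clusters=2, metric="precomputed",
                                 linkage="average").fit_predict(dist)
print(labels)
```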
Citations: 0
Review of optical flow technique for moving object detection
Pub Date : 2016-12-01 DOI: 10.1109/IC3I.2016.7917999
Anshuman Agarwal, Shivam Gupta, D. Singh
Object detection in video is a challenging task in the field of image processing. Applications of the domain include Human Machine Interaction (HMI), security and surveillance, supplemented authenticity, road traffic monitoring and medical imaging. A number of methods are available for object detection, each with constraints on the kind of application for which it can be used. This paper presents one such method, termed the optical flow technique, which is found to be more robust and efficient for moving object detection, as shown by an experiment in the paper. Applying optical flow to an image gives flow vectors of the points corresponding to the moving objects. The next part, marking the required moving object of interest, belongs to the post-processing stage; this post-processing, discussed here as blob analysis, is the main contribution of the paper to the moving object detection problem. The method is tested on datasets available online, on real-time videos and on manually recorded videos. The results show that moving objects are successfully detected using the optical flow technique and the required post-processing.
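As a concrete illustration of the pipeline summarized above, the following sketch (not from the paper) uses OpenCV's Farneback dense optical flow followed by connected-component blob analysis; the magnitude threshold, minimum blob area and the input file name video.mp4 are illustrative choices.

```python
# Minimal sketch: dense optical flow -> motion mask -> blob analysis.
import cv2
import numpy as np

def moving_object_boxes(prev_gray, next_gray, mag_thresh=2.0, min_area=200):
    """Return bounding boxes of regions whose flow magnitude exceeds a threshold."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    mask = (mag > mag_thresh).astype(np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # Blob analysis: keep connected components above a minimum area.
    n, _labels, stats, _cent = cv2.connectedComponentsWithStats(mask)
    return [tuple(stats[i, :4]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

cap = cv2.VideoCapture("video.mp4")            # hypothetical input file
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for x, y, w, h in moving_object_boxes(prev_gray, gray):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    prev_gray = gray
```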
Citations: 47
Security assessment of AODV protocol under Wormhole and DOS attacks
Pub Date : 2016-12-01 DOI: 10.1109/IC3I.2016.7917954
B. K. Joshi, Megha Soni
A MANet is a set of wireless mobile nodes that share a common wireless channel without any centralized unit. In recent years many routing protocols have been proposed for the application of MANets in government, commercial and military areas. MANets have qualities such as a dynamic nature, decentralized support and the absence of infrastructure, which make them extremely prone to attacks. Security therefore becomes a major issue in the design of routing protocols for MANets. In this paper, we present a security analysis of routing protocols in general, and of Ad-hoc On-demand Distance Vector (AODV) in particular, under different types of attacks.
Citations: 10
Fine tuning the parameters of back propagation algorithm for optimum learning performance
Pub Date : 2016-12-01 DOI: 10.1109/IC3I.2016.7917926
Viral Nagori
The back propagation algorithm has a wide range of applications for training feed-forward neural networks. Over the years, many researchers have used the back propagation algorithm to train their neural network based systems without emphasizing how to fine tune the parameters of the algorithm. This paper shows how researchers can manipulate and experiment with the parameters of the back propagation algorithm to achieve optimum learning performance, and presents the results of laboratory experiments on fine tuning those parameters. The tuning process was applied to a neural network based expert system prototype, which aims to analyze and design customized motivational strategies from the employees' perspective. The laboratory experiments covered the following parameters of the back propagation algorithm: learning rate, momentum rate and activation function. Learning performance was measured and recorded, and the impact of the activation function on the final output was also measured. Based on the results, the parameter values that provide the optimum learning performance were chosen for the full-scale system implementation.
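The kind of sweep described can be sketched as follows, using scikit-learn's SGD-trained MLPClassifier as a stand-in for the authors' prototype; the parameter grid, dataset and network size are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: grid over learning rate, momentum and activation function
# for a small feed-forward network trained by back propagation (SGD).
from itertools import product
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

best = None
for lr, mom, act in product((0.001, 0.01, 0.1), (0.5, 0.9),
                            ("logistic", "tanh", "relu")):
    clf = MLPClassifier(hidden_layer_sizes=(10,), solver="sgd",
                        learning_rate_init=lr, momentum=mom, activation=act,
                        max_iter=2000, random_state=0)
    clf.fit(X_tr, y_tr)
    score = clf.score(X_te, y_te)
    if best is None or score > best[0]:
        best = (score, lr, mom, act)

print("best accuracy %.3f with lr=%s, momentum=%s, activation=%s" % best)
```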
Citations: 7
Simulink based fuzzified COCOMO
Pub Date : 2016-12-01 DOI: 10.1109/IC3I.2016.7918800
Sonia Chhabra, Harvir Singh
Accurate estimation of the cost of software minimizes the risk in the software development process. The applicability of different cost estimation models is crucial, as the information required to implement such models is imprecise and vague. To increase the accuracy of the model, this paper proposes introducing fuzzification when calculating the effort multiplier values corresponding to the different cost drivers used in the COCOMO model. The proposed model is designed using MATLAB and modeled in Simulink. The results are validated using the COCOMO dataset, and it is observed that fuzzifying the cost drivers brings the model's results closer to the actual values, thus enhancing the accuracy of the estimation process.
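The idea can be sketched as below, assuming a triangular membership function and a placeholder rating table for a single cost driver; the constants follow the familiar intermediate-COCOMO form Effort = a * KLOC^b * EAF, but the table values and constants are illustrative, not the paper's calibrated numbers.

```python
# Minimal sketch: defuzzified effort multiplier for one cost driver,
# plugged into an intermediate-COCOMO-style effort formula.
def triangular(x, a, b, c):
    """Triangular membership function peaking at b on [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_effort_multiplier(score, ratings):
    """Weighted (defuzzified) effort multiplier for a driver score in [0, 5].

    `ratings` maps each rating's (a, b, c) triangle to its crisp multiplier.
    """
    num = den = 0.0
    for (a, b, c), em in ratings.items():
        mu = triangular(score, a, b, c)
        num += mu * em
        den += mu
    return num / den if den else 1.0

# Placeholder rating table for a single driver (values are not COCOMO's).
cplx = {(0, 1, 2): 0.85, (1, 2, 3): 1.00, (2, 3, 4): 1.15, (3, 4, 5): 1.30}

a, b, kloc = 3.0, 1.12, 40          # illustrative constants and project size
eaf = fuzzy_effort_multiplier(2.4, cplx)
effort = a * kloc ** b * eaf        # estimated effort in person-months
print(round(eaf, 3), round(effort, 1))
```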
Citations: 2
Modeling of a compact dual band PIFA using hybrid neural network
Pub Date : 2016-12-01 DOI: 10.1109/IC3I.2016.7917958
Ruchi Varma, J. Ghosh
In this paper, a compact dual band planar inverted-F antenna (PIFA) is proposed. Dual band operation is achieved by inserting slots in the top radiating patch. The patch dimension is 15 × 12 mm2 and the finite ground plane size is 44 × 40 mm2, which can easily be integrated inside a mobile phone. Further, a hybrid neural network (HNN) is used for the design of the dual band PIFA; this method is more accurate and requires less time. The HNN results are compared with the CST simulation results and are found to be in good accord.
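As a loose illustration of the surrogate-modelling idea only, the sketch below trains a plain MLP regressor (standing in for the paper's hybrid neural network) on synthetic data, mapping slot geometry to the two band centres so candidate designs could be screened before a full-wave CST run; the geometry-to-frequency relation here is invented for the example.

```python
# Minimal sketch: neural-network surrogate for antenna geometry -> frequencies.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
slots = rng.uniform([4.0, 0.5], [10.0, 2.0], size=(40, 2))   # slot length/width (mm)
# Hypothetical relation between slot geometry and the two band centres (GHz).
freqs = np.column_stack([2.4 - 0.05 * (slots[:, 0] - 7.0),
                         5.2 - 0.15 * (slots[:, 1] - 1.0)])

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                     random_state=0).fit(slots, freqs)
print(model.predict([[7.5, 1.2]]))   # predicted (f1, f2) for a new slot geometry
```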
Citations: 0
Thermal effect aware X-bit filling technique for peak temperature reduction during VLSI testing
Pub Date : 2016-12-01 DOI: 10.1109/IC3I.2016.7917963
Sanjoy Mitra, Debaprasad Das
The power density of digital circuits is increasing by leaps and bounds with the progress of technology and increased integration. Higher spatial power density leads to heat generation, which raises the peak temperature and affects flawless system behavior. The situation worsens further during testing, and this rise in temperature during test can permanently damage the chip. To resolve this problem, significant efforts have been made by academia and industry to control temperature rise during test mode operation, and peak temperature reduction is viewed as a sub-problem in this context. Controlling temperature divergence is also needed to achieve a homogeneous temperature distribution across the chip. The heat generated inside a chip under test can be reduced by lowering inter test cube switching activity. Keeping peak temperature and temperature divergence within a safe threshold can be accomplished by an intelligent don't-care bit filling approach that takes the thermal effect into account and reduces switching activity inside a circuit block. In this paper, a thermal effect aware don't-care (X) filling approach is put forward which controls peak temperature and temperature divergence within a predefined threshold during testing. The proposal is verified by extensive simulation on the ITC'99 benchmark circuits and exhibits a satisfactory level of efficacy.
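A much simplified stand-in for such filling is sketched below: adjacent fill, a well-known low-power X-filling rule that copies the last specified bit into each don't-care position to reduce scan-shift transitions. The paper's thermal-effect weighting is not modelled; the default for leading X bits is an assumption.

```python
# Minimal sketch: adjacent fill of don't-care bits and a transition count.
def adjacent_fill(cube: str) -> str:
    """Replace each X in a test cube with the most recent specified bit."""
    out, last = [], "0"            # leading Xs default to '0' (assumption)
    for bit in cube:
        if bit == "X":
            out.append(last)
        else:
            out.append(bit)
            last = bit
    return "".join(out)

def transitions(pattern: str) -> int:
    """Number of 0->1 / 1->0 transitions along the scan-in order."""
    return sum(a != b for a, b in zip(pattern, pattern[1:]))

cube = "1XX0XXX1X0"
filled = adjacent_fill(cube)
print(filled, transitions(filled))   # '1110000110', 3 transitions
```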
Citations: 0
Secure medical image steganography with RSA cryptography using decision tree
Pub Date : 2016-12-01 DOI: 10.1109/IC3I.2016.7917977
Mamta Jain, Rishabh Choudhary, A. D. Sivarama Kumar
In this article, a novel technique for the secure transmission of a patient's medical information inside a medical cover image is presented, in which the data are concealed using the decision tree concept. The decision tree provides a robust mechanism by deciding the locations in the medical carrier image where the secret information is concealed, using a secret information mapping concept. The RSA encryption algorithm is used to encipher the patient's unique information, and the RSA output is structured into equally distributed blocks. In the steganography step, the secret cipher blocks are assigned to the carrier image for data insertion by a mapping mechanism using breadth first search. The receiver recovers the hidden secret medical information of the patient using RSA decryption, so only an authorized recipient can recognize the plain text. Performance is analyzed and measured using numerous parameters between the medical stego and carrier images, and the results are compared with many existing algorithms.
{"title":"Secure medical image steganography with RSA cryptography using decision tree","authors":"Mamta Jain, Rishabh Choudhary, A. D. Sivarama Kumar","doi":"10.1109/IC3I.2016.7917977","DOIUrl":"https://doi.org/10.1109/IC3I.2016.7917977","url":null,"abstract":"In this article, a novel technique about secure medical information transmission of patient inside medical cover image is presented by concealing data using decision tree concept. Decision tree shows a robust mechanism by providing decisions for secret information concealing location in medical carrier image using secret information mapping concept. RSA encryption algorithm is being used for patient's unique information enciphering. The outcome of the RSA is structured into various equally distributed blocks. In steganography, secret cipher blocks are assigned to carrier image for data inserting by mapping mechanism using breadth first search. Receiver gets hidden secret medical information of patient using RSA decryption, so only authorized recipient can recognize the plain text. Performance is analyzed and measured using numerous parameters between medical stego and carrier images. Results are analyzed and compared with many of existing algorithms.","PeriodicalId":305971,"journal":{"name":"2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122161452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 24
Rule based chunker for Hindi
Pub Date : 2016-12-01 DOI: 10.1109/IC3I.2016.7918005
S. Asopa, Pooja Asopa, Iti Mathur, Nisheeth Joshi
In this research paper, a rule based chunker is developed and evaluated. For the development of the chunker, handcrafted linguistic rules were generated, mainly for noun, adverb, verb and adjective phrases and for conjuncts. The Indian Languages Chunk Tagset is used for annotation. For evaluation, 500 Hindi sentences tagged by an HMM tagger were given as input to our chunker. The precision, recall and F-measure of the system were calculated and found to be 79.68, 69.36 and 74.16 respectively.
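A toy version of such a chunker is sketched below; the rule table, chunk labels and the transliterated example sentence are illustrative assumptions and do not reproduce the paper's handcrafted rules or the Indian Languages Chunk Tagset.

```python
# Minimal sketch: group POS-tagged tokens into chunks with a small rule table.
RULES = {
    "NP": {"NN", "NNP", "PRP", "JJ", "QC"},   # nominal material
    "VG": {"VM", "VAUX"},                      # verb group
}

def chunk(tagged_tokens):
    """Group (word, POS) pairs into (chunk_label, [words]) spans, left to right."""
    chunks, current_label, current_words = [], None, []
    for word, pos in tagged_tokens:
        label = next((c for c, tags in RULES.items() if pos in tags), "O")
        if label == current_label and label != "O":
            current_words.append(word)
        else:
            if current_words:
                chunks.append((current_label, current_words))
            current_label, current_words = label, [word]
    if current_words:
        chunks.append((current_label, current_words))
    return chunks

# Toy POS-tagged Hindi sentence (transliterated; tags follow common Indic tagsets).
sentence = [("raam", "NNP"), ("ne", "PSP"), ("kitaab", "NN"), ("padhii", "VM")]
print(chunk(sentence))
```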
Citations: 4