
2020 Sixth International Conference on Parallel, Distributed and Grid Computing (PDGC): Latest Publications

A secured automated Attendance Management System implemented with Secret Sharing Algorithm
Pub Date : 2020-11-06 DOI: 10.1109/PDGC50313.2020.9315854
Shakti Arora, D. Verma, V. Athavale
An attendance management system is always an important entity for any organization, and a number of attendance systems such as barcode, RFID, and fingerprint-recognition systems are already available in the market. Various companies use different types of attendance management systems according to their budget and convenience. Some of the designed and adopted systems have their own drawbacks, such as data handling, data security, and privacy of information. In this paper, we have proposed and designed a secured attendance management system based on a secret sharing algorithm. The main objective is to automate the attendance system and provide a complete, authentic, and secure database of all employees or users. Due to COVID-19, most organizations are trying to adopt attendance management systems that require no physical contact or manual intervention. Our proposed solution is the most feasible solution to handle the stated problems. Attendance can be marked by scanning a QR code with a mobile phone at distant locations, while the secret sharing security algorithm provides security by distributing the complete code into a number of secret shares that can be recovered only by authentic entities, so the integrity and privacy of the information are maintained properly.
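The abstract does not specify which secret sharing scheme is used; as a hedged illustration of the general idea, the sketch below shows a minimal Shamir-style (k, n) threshold scheme in pure Python, where a numeric session code is split into n shares and any k of them reconstruct it. The field prime, share counts, and example code value are assumptions for illustration only, not the authors' implementation.

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime large enough for short numeric secrets

def split_secret(secret: int, n_shares: int, threshold: int):
    """Split an integer secret into n shares; any `threshold` of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n_shares + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

# e.g. an attendance-session code split into 5 shares, any 3 of which recover it
shares = split_secret(123456789, n_shares=5, threshold=3)
assert recover_secret(shares[:3]) == 123456789
```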
Citations: 2
Machine Learning Technique for Wireless Sensor Networks
Pub Date : 2020-11-06 DOI: 10.1109/PDGC50313.2020.9315775
Rajwinder Kaur, Jasminder Kaur Sandhu, Luxmi Sapra
Wireless Sensor Networks comprise various low-cost, low-energy sensor nodes that perform the data-gathering task. In a network, data or packets are transferred from source to destination via a sink node or other coordinating nodes. Such a network can be outlined as a network of devices that communicate information collected from the sensor field, with the information flow taking place over wireless links. Sensors are normally characterized by limited interaction abilities because of power and bandwidth constraints. In this paper, the main focus is on network issues and their solutions. We consider Machine Learning techniques implemented in this network to solve some network problems. Machine Learning is the process of training a model or machine on training data; the model is programmed in such a way that it "learns" from the information it holds. This paper surveys publications spanning the period 2015–2020 on Machine Learning techniques that address the challenging issues of Wireless Sensor Networks.
Citations: 1
Correlative Analysis of Denoising Methods in Spectral Images Embedded with Different Noises
Pub Date : 2020-11-06 DOI: 10.1109/PDGC50313.2020.9315749
Sangeetha Annam, Anshu Singla
Digital images are one of the primary means of communication in the present digital world. During the acquisition process, images may become noisy. Noise reduction is a demanding task in the image analysis process, as it must be done without degrading the important features. The procedure of restoring the original image by discarding unwanted noise is known as image denoising. The main intention of any noise-removal technique is to eradicate the noise from the image as completely as possible, such that the resulting image is better than the noisy input. In this digital era, remote sensing images are widely used commercially for environmental monitoring. In this study, a correlative analysis of different noise-removal methods using various filters on spectral images is performed. Spectral images are corrupted with different types of noise, and filters are then applied to denoise them. The performance of the methods is evaluated using two benchmarks: Signal-to-Noise Ratio (SNR) and Peak Signal-to-Noise Ratio (PSNR). Experimental results demonstrate that the SNR and PSNR measures were comparatively higher for all the filters when the image was corrupted with Poisson noise.
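As a hedged illustration of the evaluation metrics named above (not the paper's actual filter set or dataset), the sketch below adds Poisson noise to a synthetic band, applies a simple 3×3 mean filter as a stand-in denoiser, and reports SNR/PSNR before and after; the image size and values are assumptions.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between a clean image and a test image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def snr(reference: np.ndarray, test: np.ndarray) -> float:
    """Signal-to-Noise Ratio in dB (signal power over residual noise power)."""
    ref = reference.astype(np.float64)
    noise_power = np.mean((ref - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(np.mean(ref ** 2) / noise_power)

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64)).astype(np.float64)  # stand-in for one spectral band
noisy = rng.poisson(clean).astype(np.float64)                   # Poisson (shot) noise

# 3x3 mean filter built from nine shifted windows of the edge-padded image
padded = np.pad(noisy, 1, mode="edge")
denoised = sum(padded[i:i + 64, j:j + 64] for i in range(3) for j in range(3)) / 9.0

print(f"noisy:    SNR={snr(clean, noisy):.2f} dB  PSNR={psnr(clean, noisy):.2f} dB")
print(f"denoised: SNR={snr(clean, denoised):.2f} dB  PSNR={psnr(clean, denoised):.2f} dB")
```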
Citations: 1
Similarity Measure Approaches Applied in Text Document Clustering for Information Retrieval
Pub Date : 2020-11-06 DOI: 10.1109/PDGC50313.2020.9315851
Naveen Kumar, S. Yadav, Divakar Yadav
In today's world, with an ever-increasing amount of text overloading the web and digitized libraries, organizing these documents has developed into a real need. Document clustering is an important procedure that sorts a huge number of articles into a modest number of balanced groups. Document clustering places similar documents into a number of clusters such that documents within the same group have high similarity values among one another and are dissimilar to documents from other clusters. Common applications of document clustering include grouping similar news articles, analysis of customer feedback, text mining, duplicate-content detection, finding similar documents, search optimization, and many more. This allows these documents to be used for finding required information in a competent and efficient manner. Document clustering requires a measure for evaluating how dissimilar two given pieces of information are. This dissimilarity is often estimated using distance measures, for example, cosine similarity, Euclidean distance, etc. In our work, we evaluated and analyzed how effective these measures are in partitional clustering of text-document datasets. In our experiments we used the standard K-means algorithm, and we report detailed results on six text-document datasets and the five distance or similarity measures most commonly used in text clustering.
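As a hedged sketch of the general pipeline described here (not the authors' datasets, measures, or exact setup), the example below vectorizes a few toy documents with TF-IDF, partitions them with standard K-means, and computes pairwise cosine similarity; the document texts and parameters are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "stock markets rallied as tech shares gained",
    "the central bank raised interest rates again",
    "the home team won the championship final",
    "injury forces star striker out of the match",
]

# TF-IDF vectors; K-means then partitions the documents into k clusters
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)

km = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = km.fit_predict(X)

# cosine similarity between documents (1.0 = same direction, 0.0 = orthogonal)
sim = cosine_similarity(X)
print(labels)
print(sim.round(2))
```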
Citations: 2
Hybrid Job Scheduling in Distributed Systems based on Clone Detection
Pub Date : 2020-11-06 DOI: 10.1109/PDGC50313.2020.9315855
Uddalok Sen, M. Sarkar, N. Mukherjee
In order to devise an efficient scheduling policy for a large distributed heterogeneous environment, the resource requirements of newly submitted jobs should be predicted prior to their execution. An execution history can be maintained to store the execution profile of all jobs executed earlier on a given set of resources. The execution history stores the actual CPU cycles consumed by each job as well as the details of the resources on which it was executed. A feedback-guided job-modeling scheme can be used to detect similarity between newly submitted jobs and previously executed jobs, and to predict resource requirements based on this similarity. However, efficient resource scheduling based on this knowledge has not been dealt with. In this paper, we propose a hybrid scheduling policy for new jobs, which are independent of each other, based on their similarity with history jobs. Here we focus on exact clone jobs only, i.e. jobs whose identical counterparts are found in the execution history, so that the predicted resource consumption equals the actual resource consumption. We also endeavor to deal with two conflicting parameters, namely execution cost and makespan of jobs. A comparison with other existing algorithms is also presented in this paper.
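The sketch below is a hedged, simplified illustration of the clone-detection step described above: a new job is matched against an execution-history table, and an exact clone's recorded consumption is returned as the prediction. The record fields, signature scheme, and values are hypothetical, not the paper's schema.

```python
from dataclasses import dataclass

@dataclass
class JobRecord:
    """One entry of the execution history (field names are illustrative)."""
    signature: str        # e.g. hash of executable plus input description
    input_size_mb: float
    cpu_cycles: float     # actual cycles consumed on a reference resource

def find_exact_clone(new_signature: str, new_input_size_mb: float, history: list[JobRecord]):
    """Return the predicted CPU consumption if an identical (clone) job exists in history."""
    for record in history:
        if record.signature == new_signature and record.input_size_mb == new_input_size_mb:
            return record.cpu_cycles   # exact clone: prediction equals recorded consumption
    return None                        # no clone found; fall back to another estimator

history = [JobRecord("sha256:ab12", 100.0, 3.2e9), JobRecord("sha256:cd34", 40.0, 1.1e9)]
predicted = find_exact_clone("sha256:ab12", 100.0, history)
print(predicted)   # 3.2e9 -> schedule the job on a resource that can meet this demand
```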
Citations: 0
Comparative Analysis of Feature Detection and Extraction Techniques for Vision-based ISLR system
Pub Date : 2020-11-06 DOI: 10.1109/PDGC50313.2020.9315777
Akansha Tyagi, Sandhya Bansal, Arjun Kashyap
Sign language recognition is a highly adaptive interface between the deaf-mute community and machines. In India, Indian Sign Language (ISL) plays a significant role in the deaf-mute community, helping to bridge communication gaps. Extracting features from the input image is crucial in vision-based Indian Sign Language Recognition (ISLR). This paper addresses the feature detection and extraction techniques used in ISLR and categorizes existing techniques into three broad groups: scale-based, intensity-based, and hybrid techniques. SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), FAST (Features from Accelerated Segment Test), BRIEF (Binary Robust Independent Elementary Features), and ORB (Oriented FAST and Rotated BRIEF) are evaluated and compared with respect to intensity scaling, occlusion, orientation, affine transformation, blurring, and illumination. Results are reported in terms of keypoints detected, time taken, and match rate. SIFT is consistent in most circumstances, though it is slow. FAST is the fastest, with good performance similar to ORB, while BRIEF shows its advantages under affine transformation and intensity changes.
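As a hedged illustration of one of the compared techniques, the OpenCV sketch below detects ORB keypoints in two synthetic grayscale frames (stand-ins for gesture images, not a real dataset) and matches their binary descriptors with Hamming distance; parameter values are assumptions, and this is not the paper's evaluation code.

```python
import cv2
import numpy as np

# synthetic test images standing in for two sign-gesture frames
img1 = np.zeros((240, 320), dtype=np.uint8)
cv2.rectangle(img1, (60, 60), (180, 180), 255, -1)
cv2.circle(img1, (230, 120), 40, 180, -1)
M = cv2.getRotationMatrix2D((160, 120), 15, 1.0)            # rotate by 15 degrees
img2 = cv2.warpAffine(img1, M, (320, 240))

orb = cv2.ORB_create(nfeatures=500)                          # FAST keypoints + rotated BRIEF descriptors
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)   # Hamming distance suits binary descriptors
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

match_rate = len(matches) / max(len(kp1), 1)
print(f"keypoints: {len(kp1)}/{len(kp2)}, matches: {len(matches)}, match rate: {match_rate:.2f}")
```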
Citations: 1
TCB Minimization towards Secured and Lightweight IoT End Device Architecture using Virtualization at Fog Node
Pub Date : 2020-11-06 DOI: 10.1109/PDGC50313.2020.9315850
Prateek Mishra, S. Yadav, S. Arora
An Internet of Things (IoT) architecture comprises cloud, fog, and resource-constrained IoT end devices. The exponential development of IoT has increased the processing and footprint overhead in IoT end devices. All the components of an IoT end device that establish a Chain of Trust (CoT) to ensure security are termed the Trusted Computing Base (TCB). The increased overhead in IoT end devices has increased the demand for a larger TCB surface area, which increases the complexity of that surface, and the increased visibility of the TCB surface to the external world has made IoT end-device architectures over-architectured and unsecured. TCB surface-area minimization, which has so far received little attention, reduces the complexity of the TCB surface and the visibility of TCB components to the external untrusted world, and hence ensures security in terms of confidentiality, integrity, and authenticity (CIA) at the IoT end devices. TCB minimization will thus convert an over-architectured IoT end device into the lightweight and secured architecture highly desired for resource-constrained IoT end devices. In this paper we review the IoT end-device architectures proposed in the recent past and conclude that these architectures of resource-constrained IoT end devices are over-architectured due to a larger TCB, and that bugs and vulnerabilities in the TCB have been ignored, leaving them unsecured. We propose a novel levelled architecture with TCB minimization by replacing the oversized hypervisor with a lightweight micro (μ)-hypervisor, i.e. μ-visor, and transferring μ-hypervisor-based virtualization to the fog node for a lightweight and secured IoT end-device architecture. The bug-free TCB components confirm a stable CoT for guaranteed CIA, resulting in a robust Trusted Execution Environment (TEE) and hence a secured IoT end-device architecture. The proposed architecture is thus secured with a minimized SRAM and flash memory combined footprint of 39.05% of the total available memory per device.
Citations: 2
FPGA-Based Parallel Prefix Speculative Adder for Fast Computation Application
Pub Date : 2020-11-06 DOI: 10.1109/PDGC50313.2020.9315783
Garima Thakur, Harsh Sohal, Shruti Jain
Approximate computing provides a tradeoff between accuracy, speed, and power consumption. Approximate adders and other logic circuits can reduce hardware overhead. In this paper, non-speculative and speculative parallel prefix adders are proposed, making them more reliable for use in applications where high-speed circuits are required. If the result of the speculative adder is mispredicted, error correction is activated in the next clock cycle. Speculation is a process in which approximation is performed, and approximate computing is widely used in the current scenario. The speculative adder shortens the critical path and provides a trade-off between reliability and performance. The proposed speculative parallel prefix adder achieves a delay of 8.204 ns, which is a 36.87%, 2.35%, and 26.32% improvement over the conventional NSA, the proposed NSA, and the conventional SA, respectively. The architecture is implemented for a 16-bit operand length and targets an FPGA-based processing application.
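The abstract does not spell out the adder topology; as a hedged software illustration of generic parallel prefix (Kogge-Stone-style) addition, the Python sketch below computes per-bit generate/propagate signals and combines them in log2(width) prefix stages for a 16-bit operand. It is a bit-level model for intuition, not the proposed FPGA design or its speculative variant.

```python
def kogge_stone_add(a: int, b: int, width: int = 16) -> int:
    """Bit-level model of a Kogge-Stone parallel prefix adder (carry-in = 0)."""
    ai = [(a >> i) & 1 for i in range(width)]
    bi = [(b >> i) & 1 for i in range(width)]
    p = [ai[i] ^ bi[i] for i in range(width)]   # propagate
    g = [ai[i] & bi[i] for i in range(width)]   # generate
    p0 = p[:]                                   # keep original propagate for the sum bits
    d = 1
    while d < width:                            # log2(width) prefix stages
        for i in range(width - 1, d - 1, -1):   # descending order keeps previous-stage values intact
            g[i] = g[i] | (p[i] & g[i - d])
            p[i] = p[i] & p[i - d]
        d <<= 1
    carry = [0] * (width + 1)                   # carry into each bit position
    for i in range(width):
        carry[i + 1] = g[i]                     # group generate of bits [0..i]
    s = 0
    for i in range(width):
        s |= (p0[i] ^ carry[i]) << i
    return s & ((1 << width) - 1)               # result truncated to the operand width

assert kogge_stone_add(3, 5, width=4) == 8
assert kogge_stone_add(40000, 30000) == (40000 + 30000) & 0xFFFF
```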
Citations: 5
Improving the Efficiency of Automated Latent Fingerprint Identification Using Stack of Convolutional Auto-encoder
Pub Date : 2020-11-06 DOI: 10.1109/PDGC50313.2020.9315746
Megha Chhabra, M. Shukla, K. Ravulakollu
In this paper, a method for improving the efficiency of a latent fingerprint segmentation and detection system is presented. Structural detection and precise segmentation of fingerprints otherwise not visible to the naked eye (called latents) provide the basis for automatic identification of latent fingerprints. The method is based on the assumption that including detection of the relevant structures of interest from the latent fingerprint image in an effective segmentation-model pipeline improves the effectiveness of the model and the efficiency of automated segmentation. The approach discards poor-quality detections caused by noise, inadequate data, misplaced structures of interest from multiple instances of fingermarks in the image, etc. A collaborative detector-segmentation approach is proposed which establishes the reproducibility and repeatability of the model, consequently increasing the efficiency of the framework. The results are obtained on the IIIT-DCLF database. Saliency-based detection using color-based visual distortion reduces the subsequent information-processing cost through a stack of convolutional autoencoders. The results obtained signify a significant improvement over published results.
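As a hedged sketch of a convolutional autoencoder of the kind named above (not the paper's architecture, dataset, or training setup), the PyTorch example below defines a small encoder-decoder for grayscale patches and runs one reconstruction-loss training step; the layer sizes and the 64×64 patch size are assumptions.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Illustrative convolutional autoencoder for grayscale fingerprint patches."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2, padding=1, output_padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=3, stride=2, padding=1, output_padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# one training step on a dummy batch of 64x64 patches
x = torch.rand(8, 1, 64, 64)
optimizer.zero_grad()
loss = loss_fn(model(x), x)   # reconstruction loss
loss.backward()
optimizer.step()
print(float(loss))
```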
Citations: 0
Efficient Task Allocation for Cloud Using Bat Algorithm
Pub Date : 2020-11-06 DOI: 10.1109/PDGC50313.2020.9315845
Anant Kumar Jayswal
Cloud computing is a shared pool of heterogeneous computing, storage, and network resources across the globe. Resources are allocated to cloud users through a service level agreement and are accessed by end users based on pricing models. In such a scenario, the placement and management of resources over cloud datacenters is a critical issue. Task allocation in the cloud plays an important role in cloud performance, governing both resource utilization and task performance. Various static and dynamic algorithms exist to solve this issue. In this work, a Bat Algorithm-inspired task allocation algorithm for cloud infrastructure is proposed to improve cloud performance in terms of execution time and start time compared to existing algorithms.
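The abstract does not detail the encoding used; as a hedged illustration of how a Bat Algorithm can drive task-to-VM allocation, the sketch below evolves continuous bat positions that decode to VM indices and minimizes makespan. The task lengths, VM speeds, and all parameter values are illustrative assumptions, and the update rules are a simplified textbook variant, not the paper's configuration.

```python
import numpy as np

def makespan(assignment, task_len, vm_speed):
    """Fitness: largest per-VM completion time for a task -> VM assignment."""
    finish = np.zeros(len(vm_speed))
    for t, vm in enumerate(assignment):
        finish[vm] += task_len[t] / vm_speed[vm]
    return finish.max()

def bat_allocate(task_len, vm_speed, n_bats=20, iters=100, fmin=0.0, fmax=2.0,
                 loudness=0.9, pulse_rate=0.5, seed=0):
    """Simplified Bat Algorithm sketch for task allocation."""
    rng = np.random.default_rng(seed)
    n_tasks, n_vms = len(task_len), len(vm_speed)
    x = rng.uniform(0, n_vms, size=(n_bats, n_tasks))        # continuous positions
    v = np.zeros_like(x)
    decode = lambda pos: np.clip(pos.astype(int), 0, n_vms - 1)
    fit = np.array([makespan(decode(xi), task_len, vm_speed) for xi in x])
    best, best_fit = x[fit.argmin()].copy(), fit.min()
    for _ in range(iters):
        for i in range(n_bats):
            f = fmin + (fmax - fmin) * rng.random()           # frequency
            v[i] += (x[i] - best) * f
            cand = x[i] + v[i]
            if rng.random() > pulse_rate:                     # local random walk around the best bat
                cand = best + 0.01 * rng.normal(size=n_tasks)
            cand = np.clip(cand, 0, n_vms - 1e-9)
            cand_fit = makespan(decode(cand), task_len, vm_speed)
            if cand_fit <= fit[i] and rng.random() < loudness:
                x[i], fit[i] = cand, cand_fit
            if cand_fit < best_fit:
                best, best_fit = cand.copy(), cand_fit
    return decode(best), best_fit

# toy example: 10 tasks (length in MI) on 3 VMs (speed in MIPS)
tasks = np.array([400, 250, 900, 120, 600, 300, 700, 150, 500, 800], dtype=float)
vms = np.array([1000, 500, 750], dtype=float)
plan, span = bat_allocate(tasks, vms)
print(plan, round(span, 2))
```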
Citations: 3