
Latest Publications from the 2021 IEEE International Conference on Distributed Computing, VLSI, Electrical Circuits and Robotics (DISCOVER)

High Performance Variable Precision Multiplier and Accumulator Unit for Digital Filter Applications
K. Neelima, Satyam
Computationally intensive designs such as the Multiplier and Accumulator (MAC) unit need to be revisited to ascertain better performance. This paper explores the design of a MAC unit using a variable precision multiplier and applies the concept to Array, Carry Save, Booth and Vedic multipliers. The variable precision MAC unit saves computation memory because the partial results are computed with less memory, so that the final result has a size of 2n x m for multiplication of n x m bits. The designs are modeled in Verilog HDL and functionally verified with Xilinx ISE 14.5 and the ISIM simulator, targeting a Zynq 7000 series FPGA (XC7Z020-1CLG484). Among these, the Vedic VPMAC FIR filter proved better in area and delay by at least 23.05% and 17.16% respectively, with a trade-off of 2.04% in power dissipation, when compared with the other three designs. Compared with existing designs, it also uses at least 8.06% less area, 8.58% less delay and 2% less power dissipation.
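The abstract describes the variable precision idea only at a high level. As a rough behavioral illustration (in Python, not the authors' Verilog design), the sketch below models a MAC whose partial products are sized to the effective operand widths rather than a fixed maximum width; the function name and test values are hypothetical.

```python
def vp_mac(pairs):
    """Behavioral model of a variable-precision MAC: each partial product is
    kept only as wide as its operands require, and the accumulator grows as needed."""
    acc = 0
    for a, b in pairs:
        n = max(a.bit_length(), 1)                 # effective precision of operand a
        m = max(b.bit_length(), 1)                 # effective precision of operand b
        product = (a * b) & ((1 << (n + m)) - 1)   # an n x m multiply fits in n+m bits
        acc += product
    return acc

# Accumulate three products of mixed precision
print(vp_mac([(15, 3), (255, 17), (7, 200)]))      # 15*3 + 255*17 + 7*200 = 5780
```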
DOI: 10.1109/DISCOVER52564.2021.9663665 (published 2021-11-19)
Citations: 2
Load Balancing and Predictive Analysis Model Implementation in Public Cloud
M. Gagandeep, R. Pushpalatha, B. Ramesh
The data center is fundamental to cloud computing, and data centers are currently being strained by the rising demand for cloud computing services. Cloud computing practices matter greatly for device performance and scheduling, and they can make it easier for users to distribute workload among network resources. Any data-center service can eventually become overloaded or under-loaded, resulting in increased energy usage, decreased functionality and wasted resources. This paper therefore uses a contextual model with multiple metrics to adopt optimization algorithms implemented through load balancing. Load balancing with system integration strengthens resource utilization but can increase system latency. This research aims to incorporate a new system for congestion control and server expansion that accounts for migration latency, device threshold, QoS, and energy consumption.
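As a minimal sketch of threshold-based load balancing over multiple metrics (not the paper's model; the metric names, weights and threshold below are illustrative assumptions), the following routes work to the least-loaded server that is not above an overload threshold.

```python
def score(server):
    # Weighted combination of utilization metrics; the weights are illustrative only.
    return 0.5 * server["cpu"] + 0.3 * server["memory"] + 0.2 * server["latency_ms"] / 100

def pick_server(servers, overload_threshold=0.8):
    """Prefer servers below the overload threshold; fall back to all servers otherwise."""
    candidates = [s for s in servers if score(s) < overload_threshold]
    pool = candidates or servers
    return min(pool, key=score)

servers = [
    {"name": "vm-1", "cpu": 0.9, "memory": 0.7, "latency_ms": 40},
    {"name": "vm-2", "cpu": 0.4, "memory": 0.5, "latency_ms": 25},
]
print(pick_server(servers)["name"])   # vm-2 (lowest combined score)
```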
DOI: 10.1109/DISCOVER52564.2021.9663617 (published 2021-11-19)
Citations: 1
A Scalable Data Provenance Mechanism for Cloud Environment using Ethereum Blockchain
P. Abhishek, Y. Akash, D. Narayan
Cloud computing has become an important technology for processing large, computationally expensive programs and real-world data. IT organizations are entrusting cloud vendors with the security of their computational infrastructure, including the data. Thus, better provenance assurance is needed for data in the cloud, both for security and to establish trust between cloud vendors and customers. Blockchain technology operates in a decentralized way, building trust between the entities in a system through an immutable ledger. In this work, we propose a decentralized and trusted cloud data provenance mechanism using the Ethereum blockchain platform, IPFS and a scalable consensus mechanism. Proof of Work (PoW), the consensus algorithm currently used in Ethereum, requires a large amount of computational power to process data on the blockchain. We therefore implement the Proof of Stake (PoS) consensus algorithm in Ethereum to improve the efficiency of the proposed data provenance mechanism. We demonstrate the performance of the PoS-enabled provenance framework on a multi-node testbed; the results show that PoS outperforms PoW for cloud data provenance.
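The abstract contrasts PoW with PoS. The toy sketch below illustrates only the core idea behind Proof of Stake, a stake-weighted selection of the validator that appends the next block; it is not Ethereum's consensus code, and the node names and stake amounts are made up.

```python
import random

def select_validator(stakes, rng=random.Random(42)):
    """Pick a validator with probability proportional to its stake."""
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

stakes = {"node-A": 32, "node-B": 96, "node-C": 16}
# A single stake-weighted draw; over many draws node-B is picked about 2/3 of the time.
print(select_validator(stakes))
```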
DOI: 10.1109/DISCOVER52564.2021.9663425 (published 2021-11-19)
Citations: 2
Secure Access Control to Cloud Resources using Blockchain
H. Spoorti, R. Sneha, V. Soujanya, K. Heena, S. Pooja, D. Narayan
With the steady increase in the complexity of cloud computing and other technologies, providing privacy and data security has become a significant concern. For secure access to cloud resources, protection of sensitive data is essential. The critical data related to resource provisioning in a cloud environment includes key pairs and IP addresses, which the user needs in order to access the provisioned resources. In conventional systems this data is handed directly to the user, which results in local storage and opens a vulnerability to snooping by a malicious intruder. Furthermore, the user has to repeatedly recall the key pairs and other information for every created resource. Recently, blockchain, with its decentralized and immutable features, has been explored in the design of access control mechanisms. In this work, we design a blockchain-based access control mechanism for cloud resources: we provision resources with OpenStack, store their critical data in the Ethereum blockchain, and use it for secure access control. We also carry out a scalability and performance analysis of the blockchain-based system on the Ethereum platform.
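As a minimal sketch of the underlying idea, keeping a tamper-evident record of a resource's critical data (key pair, IP address) against which access requests are checked, the code below uses a plain dictionary as a stand-in for the ledger; it is not the paper's OpenStack/Ethereum implementation and all names and values are hypothetical.

```python
import hashlib
import json

ledger = {}   # stand-in for an on-chain mapping: resource ID -> digest of critical data

def register_resource(resource_id, key_pair, ip_address):
    """Record a digest of the resource's critical data instead of the data itself."""
    record = json.dumps({"key_pair": key_pair, "ip": ip_address}, sort_keys=True)
    ledger[resource_id] = hashlib.sha256(record.encode()).hexdigest()

def verify_access(resource_id, key_pair, ip_address):
    """Grant access only if the presented data matches the registered digest."""
    record = json.dumps({"key_pair": key_pair, "ip": ip_address}, sort_keys=True)
    return ledger.get(resource_id) == hashlib.sha256(record.encode()).hexdigest()

register_resource("vm-42", "ssh-rsa AAAB-example", "10.0.0.7")
print(verify_access("vm-42", "ssh-rsa AAAB-example", "10.0.0.7"))  # True
print(verify_access("vm-42", "ssh-rsa WRONG", "10.0.0.7"))         # False
```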
DOI: 10.1109/DISCOVER52564.2021.9663647 (published 2021-11-19)
Citations: 0
Security of the Medical Images using Different Chaos map Techniques
S. Ragani, T. Vijaya Murari, K. Ravishankar
Nowadays the Internet is used for fast transmission of valuable information, which may be text, images or video. With advances in computing and biomedical technology, medical JPEG images carry a patient's personal data, and the security of this private data attracts considerable attention. Advances in software and communication have given images a significant role in telediagnosis, tele-surgery and related domains. At the same time, such progress provides new ways to process clinical pictures and raises security concerns regarding confidentiality, integrity and availability. This paper provides an overview of the approaches currently in use and then develops a new mechanism based on various chaos map techniques, providing greater integrity and security for the image by making the keyspace large enough to withstand brute-force attacks.
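A common chaos-based construction (not necessarily the specific scheme of this paper) derives a keystream from the logistic map and XORs it with the pixel values, so the same key parameters (x0, r) both encrypt and decrypt. A minimal sketch on a random stand-in image:

```python
import numpy as np

def logistic_keystream(length, x0=0.3579, r=3.99):
    """Iterate x_{n+1} = r * x_n * (1 - x_n) and quantize each value to a byte."""
    stream = np.empty(length, dtype=np.uint8)
    x = x0
    for i in range(length):
        x = r * x * (1 - x)
        stream[i] = int(x * 256) % 256
    return stream

def chaos_xor(image, x0=0.3579, r=3.99):
    """XOR the flattened image with the chaotic keystream; applying it twice decrypts."""
    flat = image.reshape(-1)
    ks = logistic_keystream(flat.size, x0, r)
    return (flat ^ ks).reshape(image.shape)

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)   # stand-in "medical image"
enc = chaos_xor(img)
dec = chaos_xor(enc)                                        # same (x0, r) recovers it
assert np.array_equal(img, dec)
```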
DOI: 10.1109/DISCOVER52564.2021.9663714 (published 2021-11-19)
Citations: 1
PCA and SVM Technique for Epileptic Seizure Classification
Mohammad Asif Raibag, J. V. Franklin
In India there is a shortage of skilled neuro-physicians who can correctly and promptly analyze the complicated features of the electroencephalogram (EEG) signal, which is critical in epilepsy diagnosis; developing a reliable seizure classification model therefore remains a challenging task. This paper proposes a Support Vector Machine (SVM)-based mechanism for classifying epileptic seizures from EEG recordings of brain activity. Relevant features are selected from time-frequency domain (TFD) EEG recordings. Principal Component Analysis (PCA) is applied to improve the performance of the model, and an SVM classifier with different kernels is used for classification. The results show that the proposed PCA-SVM approach with a radial basis function kernel improves epilepsy classification, reaching an accuracy of 96.6% on normal versus epileptic data. The other performance parameters also show promising results, so the proposed SVM-RBF model achieves robust classification for epilepsy.
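A minimal sketch of the PCA plus RBF-kernel SVM pipeline described above, written with scikit-learn on synthetic stand-in data (the paper's EEG features, component count and kernel parameters are not reproduced here):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # 200 synthetic EEG feature vectors, 64 features each
y = rng.integers(0, 2, size=200)      # 0 = normal, 1 = epileptic (synthetic labels)

# Standardize, reduce dimensionality with PCA, then classify with an RBF-kernel SVM.
model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
model.fit(X, y)
print(model.score(X, y))              # training accuracy on the synthetic data
```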
DOI: 10.1109/DISCOVER52564.2021.9663616 (published 2021-11-19)
Citations: 0
A Hardware Architecture for Sample Adaptive Offset Filter in HEVC
P. Kopperundevi, M. S. Prakash
The Sample Adaptive Offset (SAO) filter is the in-loop filtering tool newly introduced in the High Efficiency Video Coding (HEVC) standard. It mainly helps to reduce ringing artifacts, which occur due to distortion or loss of high-frequency information. While SAO contributes a significant increase in coding efficiency, the complexity of in-loop filtering in HEVC encoding is dominated by the estimation of the SAO parameters. SAO estimation comprises two phases: statistics collection and parameter determination. The statistics collection phase calculates the sum and count for band and edge offsets. The parameter determination phase involves distortion and offset generation, cost generation and decision, the SAO type decision and, finally, the merge mode decision. In this paper, we design the SAO encoder's hardware architecture using a separate clock for each phase. Compared with existing architectures, the proposed architecture reduces area by 4%-73% while achieving a similar throughput rate. The design occupies a 70K gate count with a minimum operating frequency of 375 MHz.
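As a behavioral reference for the statistics collection phase (in Python, not the paper's hardware), the sketch below classifies each sample into one of the standard SAO edge-offset categories by comparing it with its two neighbours along a direction, then accumulates the per-category sum and count used later for offset derivation; the sample values are made up.

```python
def edge_category(p, n0, n1):
    """Standard SAO edge-offset categories based on the two neighbours n0, n1."""
    if p < n0 and p < n1:
        return 1                          # local minimum
    if (p < n0 and p == n1) or (p == n0 and p < n1):
        return 2                          # concave corner
    if (p > n0 and p == n1) or (p == n0 and p > n1):
        return 3                          # convex corner
    if p > n0 and p > n1:
        return 4                          # local maximum
    return 0                              # no offset applied

def collect_stats(recon_row, orig_row):
    """Per-category sum of (original - reconstructed) and sample count."""
    stats = {c: {"sum": 0, "count": 0} for c in range(5)}
    for i in range(1, len(recon_row) - 1):
        c = edge_category(recon_row[i], recon_row[i - 1], recon_row[i + 1])
        stats[c]["sum"] += orig_row[i] - recon_row[i]
        stats[c]["count"] += 1
    return stats

recon = [10, 8, 12, 12, 15, 11]
orig  = [10, 9, 11, 12, 14, 11]
print(collect_stats(recon, orig))
```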
DOI: 10.1109/DISCOVER52564.2021.9663590 (published 2021-11-19)
Citations: 0
Classification of Crop and Weed from Digital Images: A Review
Radhika Kamath, Mamatha Balachandra, Srikanth Prabhu
One of the major concerns in the agricultural sector is the control of weeds. Weeds can reduce crop yield significantly and thus incur huge losses. There are many ways of controlling weeds, such as chemical herbicides, manual weeding, and mechanical weeders. Overuse of chemical herbicides harms the environment, labor shortage is the main problem with manual weeding, and mechanical weeding is not effective and is unsuitable for some crops such as direct-seeded rice fields. In recent years, technology has been explored in agriculture for automatically detecting and identifying weeds from digital images. This is useful for recommending specific herbicides, thereby reducing herbicide overuse and herbicide-resistant weeds, and so contributes to site-specific weed management. This paper reviews some of the important research carried out on the classification of crops and weeds from digital images and discusses some important directions for future research.
DOI: 10.1109/DISCOVER52564.2021.9663729 (published 2021-11-19)
Citations: 0
[DISCOVER 2021 Front cover]
DOI: 10.1109/discover52564.2021.9663626 (published 2021-11-19)
Citations: 0
An Efficient Triple-Layered and Double Secured Cryptography Technique in Wireless Sensor Networks
K. Vivek, M. R. Kale, Venkata Sai Krishna Thotakura, K. Sushma
When implemented in a complex environment, security is the main factor for a wireless network and the primary concern of sensor networks; cryptology is a vital component of wireless sensor networks for accomplishing this. Many existing cryptographic techniques have not shown good results so far. This study introduces an efficient, strong, triple-phased, double-secured and integrated cryptographic approach that utilizes both secret-key and public-key methods. The Rijndael Encryption Approach (REA), Horst Feistel's Encryption Approach (HFEA), and enhanced Rivest-Shamir-Adleman (e-RSA) are employed in various stages of the algorithm, since a secret-key based system offers a significant level of protection while key management is enabled through public-key based techniques. REA was used in stage 1 of the algorithm, REA+HFEA in stage 2, and REA+HFEA+e-RSA in the last stage, with all three stages performed in parallel. Encryption time and decryption time were used to measure the performance of the proposed approach, and the algorithm is differentiated from existing techniques using a single evaluation parameter, computation time. The proposed approach performed well in terms of computation time, with an Average Encryption Time (AET) of 1.12 and an Average Decryption Time (ADT) of 1.26 on text sizes of 6, 25, 35, 61, and 184 megabytes (MB). The proposed hybrid model is 1.36 times faster than ECC+RSA+MD-5, 3.25 times faster than AES+ECC, 2.7 times faster than AES+RSA, and 3.24 times faster than AES+ECC+RSA+MD5.
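Of the three stages named above, the Feistel structure (HFEA) is the easiest to illustrate in isolation. The toy sketch below is not the paper's algorithm: the Rijndael and e-RSA stages are omitted and the round function is just a truncated SHA-256, but it shows how a Feistel network encrypts and decrypts with the same structure by running the rounds in reverse.

```python
import hashlib

def _round(half, key, i):
    # Round function: truncated SHA-256 of the half-block, key and round index.
    return hashlib.sha256(half + key + bytes([i])).digest()[:len(half)]

def feistel_encrypt(block16, key, rounds=8):
    L, R = block16[:8], block16[8:]
    for i in range(rounds):
        L, R = R, bytes(a ^ b for a, b in zip(L, _round(R, key, i)))
    return L + R

def feistel_decrypt(block16, key, rounds=8):
    L, R = block16[:8], block16[8:]
    for i in reversed(range(rounds)):
        R, L = L, bytes(a ^ b for a, b in zip(R, _round(L, key, i)))
    return L + R

block = b"0123456789abcdef"
key = b"secret-key"
ct = feistel_encrypt(block, key)
assert feistel_decrypt(ct, key) == block
```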
DOI: 10.1109/DISCOVER52564.2021.9663674 (published 2021-11-19)
Citations: 10