
2022 2nd International Conference on Artificial Intelligence and Signal Processing (AISP) — Latest Publications

Human Pose Estimation in 3D using heatmaps
Pub Date : 2022-02-12 DOI: 10.1109/AISP53593.2022.9760634
Sachin Parajuli, Manoj Kumar Guragai
3D human pose estimation involves estimating human joint locations in 3D directly from 2D camera images, so the estimation model must infer depth information directly from the 2D images. We explore two methods in this paper, both of which represent human pose as a heatmap. The first follows Newell et al. [6] and Martinez et al. [7]: we predict 2D poses and then lift these 2D poses to 3D. The second, inspired by Pavlakos et al. [8], learns 3D pose directly from the 2D images. We observe that while both approaches work well, the mean of their predictions gives the best mean per-joint prediction error (MPJPE) score.
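As a quick illustration of the evaluation and ensembling described above, here is a minimal sketch. The synthetic joints, the 17-joint skeleton, and the noise scales are placeholders standing in for real model outputs, not values from the paper:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint prediction error: average Euclidean distance between
    predicted and ground-truth 3D joints (arrays of shape frames x joints x 3)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

rng = np.random.default_rng(0)
gt = rng.normal(size=(4, 17, 3))                       # ground-truth 3D poses
# two hypothetical model outputs with independent errors
pred_a = gt + rng.normal(scale=0.05, size=gt.shape)    # "lifting" model
pred_b = gt + rng.normal(scale=0.05, size=gt.shape)    # "direct 3D" model
ensemble = (pred_a + pred_b) / 2                       # mean of both predictions

print(mpjpe(pred_a, gt), mpjpe(pred_b, gt), mpjpe(ensemble, gt))
```

Because the two models make (roughly) independent errors, averaging their joint predictions tends to lower the MPJPE below either model alone, which mirrors the paper's observation.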
Citations: 0
Simulation Analysis of Different TMDC Materials and their Performances
Pub Date : 2022-02-12 DOI: 10.1109/AISP53593.2022.9760597
Vydha Pradeep Kumar, D. Panda
The simulations and applications of nanostructured molybdenum disulfide particles and other complex composite materials are discussed in this paper. These materials have several appealing features related to the transfer characteristics of the base element, i.e. a metal oxide semiconductor, and to the strong chemical activity of the sulphur and oxygen family elements. Significant progress in the procedures for creating and structuring MoS2 nanoparticles, and in the mechanisms underpinning their biological characteristics and catalytic activity, has helped us understand the properties of various materials. Most significantly, this paper studies the simulation/synthesis analysis of different TMDC materials in different FET transistor model designs and their applications. The benefits and prospects offered by MoS2 nanoparticles, nano-architectures, and other similar materials are discussed.
Citations: 0
Forecasting Carbon Dioxide Levels Using Autoregressive Integrated Moving Average Model
Pub Date : 2022-02-12 DOI: 10.1109/AISP53593.2022.9760681
M. Ravi Kumar, S. Panda, Venkateswara Reddy Guruguluri, Namratha Potluri, Nagasree Kolli
In the last few decades, forecasting lower-atmospheric carbon dioxide (CO2) levels has been an important topic among atmospheric scientists and engineers seeking better predictive models for CO2 levels in view of accelerating pollution. In the present work, we exploit the capability of the autoregressive integrated moving average (ARIMA) model for time-series prediction of CO2 levels, using the long-term air-sample recordings at the Mauna Loa Observatory in Hawaii, USA, from March 1958 to December 2001. The results reveal that forecasting this parameter with the ARIMA model significantly improves upon existing techniques for such lower-atmospheric parameters.
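The forecasting step can be sketched without any time-series library. The snippet below hand-rolls an ARIMA(1,1,0)-style forecast (difference once, fit an AR(1) coefficient by least squares, then integrate back) on a toy trend-plus-seasonality series standing in for the Mauna Loa record; the paper's actual model orders are not stated here, so this is only illustrative:

```python
import numpy as np

def arima_110_forecast(y, steps):
    """Minimal ARIMA(1,1,0) sketch: difference the series (the "I" step,
    d=1), fit one AR coefficient by least squares on the differences,
    then forecast recursively and undo the differencing."""
    d = np.diff(y)
    phi = (d[:-1] @ d[1:]) / (d[:-1] @ d[:-1])  # AR(1) least-squares fit
    last, prev_diff = y[-1], d[-1]
    out = []
    for _ in range(steps):
        prev_diff = phi * prev_diff   # AR(1) recursion on the differences
        last = last + prev_diff       # integrate back to the level
        out.append(last)
    return np.array(out)

# toy upward-trending monthly series (trend + annual cycle), a stand-in
# for the Mauna Loa CO2 record in ppm
t = np.arange(120)
y = 315 + 0.1 * t + np.sin(2 * np.pi * t / 12)
print(arima_110_forecast(y, 3))
```

A production model would also select (p, d, q) by information criteria and add a moving-average term; this sketch only shows the mechanics of differencing and autoregression.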
Citations: 0
Honey Adulteration Detection using Hyperspectral Imaging and Machine Learning
Pub Date : 2022-02-12 DOI: 10.1109/AISP53593.2022.9760585
Mokhtar A. Al-Awadhi, R. Deshmukh
This paper aims to develop a machine learning-based system for automatically detecting honey adulteration with sugar syrup, based on honey hyperspectral imaging data. First, the floral source of a honey sample is classified by a botanical origin identification subsystem. Then, the sugar syrup adulteration is identified, and its concentration is quantified by an adulteration detection subsystem. Both subsystems consist of two steps. The first step involves extracting relevant features from the honey sample using Linear Discriminant Analysis (LDA). In the second step, we utilize the K-Nearest Neighbors (KNN) model to classify the honey botanical origin in the first subsystem and identify the adulteration level in the second subsystem. We assess the proposed system performance on a public honey hyperspectral image dataset. The result indicates that the proposed system can detect adulteration in honey with an overall cross-validation accuracy of 96.39%, making it an appropriate alternative to the current chemical-based detection methods.
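The two-step pipeline above (LDA feature extraction followed by KNN classification) can be sketched in plain NumPy. The snippet uses the classic two-class Fisher discriminant and a k=3 vote on synthetic 8-band "spectra"; the band count, sample sizes, and class separation are stand-ins for the real hyperspectral data:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic stand-ins for hyperspectral samples: 8 bands, two classes
# (e.g. pure honey vs. syrup-adulterated honey)
X0 = rng.normal(0.0, 1.0, size=(20, 8))
X1 = rng.normal(1.5, 1.0, size=(20, 8))
X = np.vstack([X0, X1])
y = np.array([0] * 20 + [1] * 20)

# Step 1: Fisher LDA projection (two-class form): w = Sw^-1 (m1 - m0)
m0, m1 = X0.mean(0), X1.mean(0)
Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
w = np.linalg.solve(Sw, m1 - m0)
z = X @ w                      # 1-D discriminant feature per sample

# Step 2: K-nearest-neighbours majority vote (k=3) in the projected space
def knn_predict(zq, z_train, y_train, k=3):
    idx = np.argsort(np.abs(z_train - zq))[:k]
    return np.bincount(y_train[idx]).argmax()

# leave-one-out evaluation, a simple form of the paper's cross-validation
preds = np.array([knn_predict(z[i], np.delete(z, i), np.delete(y, i))
                  for i in range(len(z))])
print("leave-one-out accuracy:", np.mean(preds == y))
```

The paper's system runs this pattern twice: once to classify the botanical origin and once to classify the adulteration level.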
Citations: 2
Real-Time Emotion Recognition from Facial Expressions using Artificial Intelligence
Pub Date : 2022-02-12 DOI: 10.1109/AISP53593.2022.9760654
Prashant Dhope, Mahesh B. Neelagar
Emotion is the most important factor distinguishing humans from robots, and as artificial intelligence advances, machines are becoming more aware of human emotions. The objective of the proposed method is to use artificial intelligence to build a real-time facial emotion identification system capable of recognizing all seven fundamental facial emotions: angry, disgust, fear, happy, neutral, sad, and surprise. A self-prepared dataset is used to train the algorithm; the model is trained and facial expressions are recognized using a convolutional neural network. Real-time testing is accomplished using a Raspberry Pi 3B+ board and Pi-Camera, and a graphical user interface (GUI) is created for the system using PyQt5. Experimental results show that the proposed methodology achieves a recognition accuracy of up to 99.88%.
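To make the classification step concrete, the sketch below runs a single untrained convolution-plus-ReLU filter and a softmax over the seven emotion labels in plain NumPy. The image size, kernel, and random weights are placeholders, not the paper's trained network:

```python
import numpy as np

EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def conv2d(img, kernel):
    """Naive valid-mode 2D correlation: the forward pass of one CNN filter."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
face = rng.random((8, 8))                                    # stand-in face crop
feat = np.maximum(conv2d(face, rng.normal(size=(3, 3))), 0)  # conv + ReLU
logits = rng.normal(size=(7,)) + feat.mean()                 # untrained 7-way head
probs = softmax(logits)
print(EMOTIONS[int(np.argmax(probs))])
```

A real system stacks many such filters, pools, and trains the weights by backpropagation; on the Raspberry Pi the trained network would simply be run frame by frame on Pi-Camera crops.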
Citations: 3
A novel face recognition technique using Convolutional Neural Network, HOG, and histogram of LBP features
Pub Date : 2022-02-12 DOI: 10.1109/AISP53593.2022.9760679
S. Yallamandaiah, N. Purnachand
Face recognition is the process of verifying an individual using facial images, and it is widely employed in identifying people on social media platforms, validating identity at ATMs, finding missing persons, controlling access to sensitive areas, finding lost pets, etc. It remains an active research area because of challenges such as illumination variations and differing poses and expressions. Here, a novel methodology is introduced for face recognition using the Histogram of Oriented Gradients (HOG), a histogram of Local Binary Patterns (LBP), and a Convolutional Neural Network (CNN). The HOG features, the LBP histogram, and deep features from the proposed CNN are linearly concatenated to produce the feature space, which is then classified by a Support Vector Machine. The face databases ORL, Extended Yale B, and CMU PIE are used for the experimental work, attaining recognition rates of 98.48%, 97.33%, and 97.28% respectively.
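The handcrafted half of the fused descriptor can be sketched in plain NumPy: a coarse whole-image gradient-orientation histogram (HOG-like, without the usual cells and blocks) concatenated with a basic 8-neighbour LBP histogram. In the paper this vector would be further concatenated with CNN features before the SVM:

```python
import numpy as np

def hog_like(img, bins=9):
    """Coarse HOG-style descriptor: histogram of gradient orientations
    weighted by gradient magnitude over the whole image."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    return hist / (hist.sum() + 1e-9)

def lbp_hist(img):
    """Histogram of basic 8-neighbour Local Binary Pattern codes."""
    c = img[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((nb >= c) << bit).astype(np.uint8)  # set one bit per neighbour
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(32, 32))   # stand-in grayscale face image
feature = np.concatenate([hog_like(face), lbp_hist(face)])  # fused descriptor
print(feature.shape)
```

The concatenated vector (9 orientation bins + 256 LBP bins here) is what a linear SVM would consume, one row per training face.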
Citations: 3
Order Reduction of Continuous Interval Zeta Converter Model using Direct Truncation Method
Pub Date : 2022-02-12 DOI: 10.1109/AISP53593.2022.9760520
V. Meena, Rishika Agrawal, Rajat Gumber, Anuja R. Tipare, Vinay Singh
The Zeta converter consists of two inductors and two capacitors, hence it is a fourth-order system. The output voltage of a Zeta converter can be lower or higher than the input voltage. In this paper, a small-signal dynamic model is obtained using the steady-state averaging technique, and the direct truncation method is then used to reduce the given fourth-order test case to first-, second- and third-order systems respectively. Impulse and step responses are plotted, and the integral square error is calculated for the lower and upper limits of the reduced-order transfer functions. The results presented prove the applicability and effectiveness of the proposed method.
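Direct truncation itself is simple to state in code: with polynomial coefficients stored in ascending powers of s, the r-th order reduced model keeps only the low-order coefficients of numerator and denominator. The coefficients below are hypothetical, not the paper's converter model; for an interval system the same truncation is applied to the lower-limit and upper-limit coefficient sets separately:

```python
import numpy as np

def direct_truncation(num, den, r):
    """Direct truncation sketch: keep coefficients of s^0..s^(r-1) in the
    numerator and s^0..s^r in the denominator (ascending powers of s)."""
    return num[:r], den[:r + 1]

# hypothetical 4th-order transfer function, ascending powers of s
num = np.array([8.0, 6.0, 2.0, 1.0])        # 8 + 6s + 2s^2 + s^3
den = np.array([4.0, 10.0, 9.0, 5.0, 1.0])  # 4 + 10s + 9s^2 + 5s^3 + s^4

n2, d2 = direct_truncation(num, den, 2)     # second-order reduced model
print(n2, d2)  # keeps (8 + 6s) / (4 + 10s + 9s^2)
```

In the paper, the reduced lower- and upper-limit models are then compared against the full model via step/impulse responses and the integral square error.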
Citations: 6
Light Weight Encoder-Decoder for Underwater Images in Internet of Underwater Things (IoUT)
Pub Date : 2022-02-12 DOI: 10.1109/AISP53593.2022.9760532
Rashmi S. Nair, Rohit Agrawal, S. Domnic
The Internet of Underwater Things (IoUT) extends Internet of Things (IoT) applications to monitoring sea-animal habitats, observing the atmosphere, and supporting defense and disaster prediction. Raw underwater images are degraded by the absorption and scattering of light in the underwater environment, and low-power computational devices are preferred to cut down the cost of IoUT devices. Because of the nature of the underwater environment, transmitting the images captured by underwater devices is a major challenge, and solutions are needed that amplify the color, contrast, and brightness of captured underwater images to provide good visual understanding. Conventional compression techniques designed for terrestrial environments cause ringing artefacts due to the variable characteristics of underwater images, while deep image-compression techniques consume more computational power and time, making them the least efficient option for low-power computational devices. In this study, an image-compression technique with low computational cost and running time is proposed to achieve high encoding efficiency and good reconstruction quality for underwater images. The proposed technique uses a Convolutional Neural Network (CNN) at the encoder side, which compresses the underwater image while retaining its structural data, and a relative global histogram stretching technique at the decoder side to enhance the reconstructed underwater image. The proposed methodology is compared with conventional methods such as Joint Photographic Experts Group (JPEG) compression, Better Portable Graphics (BPG), and Contrast Limited Adaptive Histogram Equalization (CLAHE), and with deep-learning techniques such as the Super-Resolution Convolutional Neural Network (SRCNN) and residual encoder-decoder methods, to evaluate the reconstructed image quality. The presented work provides higher-quality images than both the conventional methods and SRCNN.
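The decoder-side enhancement can be approximated with a simple global percentile stretch per channel, a stand-in for the paper's relative global histogram stretching (the percentile cutoffs here are assumptions):

```python
import numpy as np

def stretch_channel(ch, lo_pct=1, hi_pct=99):
    """Simple global histogram stretch: map the [lo, hi] percentile range
    of one channel onto the full [0, 255] range, clipping outliers."""
    lo, hi = np.percentile(ch, [lo_pct, hi_pct])
    out = (ch.astype(float) - lo) / max(hi - lo, 1e-9) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
# synthetic low-contrast "underwater" image: values squeezed into 60..119
img = rng.integers(60, 120, size=(16, 16, 3)).astype(np.uint8)
enhanced = np.dstack([stretch_channel(img[..., c]) for c in range(3)])
print(img.min(), img.max(), "->", enhanced.min(), enhanced.max())
```

A relative (per-channel, distribution-aware) variant would pick the stretch limits per color channel from that channel's own histogram, which is what helps correct the blue-green cast of underwater scenes.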
Citations: 0
Design of a Wideband Periodic Cylindrical Conformal Fork shaped Antenna for WiMAX Application
Pub Date : 2022-02-12 DOI: 10.1109/AISP53593.2022.9760528
Ratikanta Sahoo
A novel wideband conformal antenna shape is presented in this article: a wideband directional conformal fork-shaped antenna. A combination of the tapered-slot-antenna concept and dipole arrays is used to obtain a directional radiation pattern. The dipole element is shaped like a fork with a semi-annular ring structure; following the log-periodic antenna concept, the proposed antenna uses three such fork-shaped dipole elements instead of a wire dipole. The proposed radial cylindrical conformal antenna operates from 3.5 to 4.1 GHz, achieving an impedance bandwidth of 600 MHz, with a gain of around 4.7 to 5.6 dBi within the operating band. The half-power beamwidth (HPBW) at 3.3 and 3.5 GHz is 122° and 116° in the H-plane and 56° and 57° in the E-plane, respectively. The proposed cylindrical antenna is a reasonable candidate for WiMAX applications.
Citations: 0
Hiding Sensitive Information in Surveillance Video without Affecting Nefarious Activity Detection
Pub Date : 2022-02-12 DOI: 10.1109/AISP53593.2022.9760607
Sonali Rout, R. Mohapatra
Protection of private and sensitive information is the most pressing issue for security providers of surveillance video, so providing privacy and enhancing secrecy in surveillance video without affecting its efficiency in detecting violent activities is a challenging task. Here a steganography-based algorithm is proposed that hides private information inside the surveillance video without affecting its accuracy in criminal-activity detection. The surveillance video is preprocessed using the Tunable Q-factor Wavelet Transform (TQWT), secret data is hidden using the Discrete Wavelet Transform (DWT), and after adding the payload, detection of criminal activities is conducted while maintaining the same accuracy as on the original video. The UCF-Crime dataset is used to validate the proposed framework. Features are extracted and, after feature selection, trained with a Temporal Convolutional Network (TCN) for detection. Performance is compared to state-of-the-art methods, showing that applying steganography does not affect the detection rate while preserving the perceptual quality of the surveillance video.
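The embedding idea (hiding bits in wavelet detail coefficients) can be sketched with a one-level Haar transform and quantization-index embedding. This uses Haar rather than the paper's TQWT/DWT pipeline and a tiny float "frame", so it is purely illustrative:

```python
import numpy as np

def haar_pairs(x):
    """One-level 1-D Haar DWT along the last axis: averages and details."""
    a = (x[..., 0::2] + x[..., 1::2]) / 2.0
    d = (x[..., 0::2] - x[..., 1::2]) / 2.0
    return a, d

def inv_haar_pairs(a, d):
    """Exact inverse of haar_pairs."""
    out = np.empty(a.shape[:-1] + (a.shape[-1] * 2,))
    out[..., 0::2] = a + d
    out[..., 1::2] = a - d
    return out

def embed_bits(img, bits):
    """Quantization-index embedding: snap each used detail coefficient to
    an even integer for bit 0 or an odd integer for bit 1."""
    a, d = haar_pairs(img.astype(float))
    flat = d.reshape(-1)
    for i, b in enumerate(bits):
        flat[i] = 2.0 * np.round(flat[i] / 2.0) + b
    return inv_haar_pairs(a, flat.reshape(d.shape))

def extract_bits(stego, n):
    """Recompute the details and read each hidden bit back as a parity."""
    _, d = haar_pairs(stego)
    return [int(np.round(v)) % 2 for v in d.reshape(-1)[:n]]

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(8, 8)).astype(float)  # stand-in video frame
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_bits(cover, secret)
print(extract_bits(stego, len(secret)) == secret)  # True
```

Because the details are only nudged by at most one quantization step, the stego frame stays visually close to the cover frame, which is why a downstream activity detector can keep working on it.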
隐私和敏感信息的保护是监控视频中最令人担忧的问题。因此,既要保证监控录像的私密性,又要提高其保密性,同时又不影响其侦查暴力活动的效率,是一项具有挑战性的任务。本文提出了一种基于隐写术的隐写算法,在不影响监控视频犯罪活动检测准确性的前提下,将监控视频中的隐私信息隐藏起来。利用可调q因子小波变换(TQWT)对监控视频进行预处理,利用离散小波变换(DWT)对秘密数据进行隐藏,在对监控视频增加载荷后,在保持与原始监控视频相同精度的情况下,对犯罪活动进行检测。ucf犯罪数据集已用于验证所提出的框架。进行特征提取,特征选择后训练到时间卷积网络(TCN)进行检测。性能指标与最先进的方法进行了比较,表明隐写术的应用不会影响检测率,同时保留了监控视频的感知质量。
{"title":"Hiding Sensitive Information in Surveillance Video without Affecting Nefarious Activity Detection","authors":"Sonali Rout, R. Mohapatra","doi":"10.1109/AISP53593.2022.9760607","DOIUrl":"https://doi.org/10.1109/AISP53593.2022.9760607","url":null,"abstract":"Protection of private and sensitive information is the most alarming issue for security providers in surveillance videos. So to provide privacy as well as to enhance secrecy in surveillance video without affecting its efficiency in detection of violent activities is a challenging task. Here a steganography based algorithm has been proposed which hides private information inside the surveillance video without affecting its accuracy in criminal activity detection. Preprocessing of the surveillance video has been performed using Tunable Q-factor Wavelet Transform (TQWT), secret data has been hidden using Discrete Wavelet Transform (DWT) and after adding payload to the surveillance video, detection of criminal activities has been conducted with maintaining same accuracy as original surveillance video. UCF-crime dataset has been used to validate the proposed framework. Feature extraction is performed and after feature selection it has been trained to Temporal Convolutional Network (TCN) for detection. Performance measure has been compared to the state-of-the-art methods which shows that application of steganography does not affect the detection rate while preserving the perceptual quality of the surveillance video.","PeriodicalId":6793,"journal":{"name":"2022 2nd International Conference on Artificial Intelligence and Signal Processing (AISP)","volume":"162 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2022-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74792465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0