Pub Date: 2022-02-12 | DOI: 10.1109/AISP53593.2022.9760634
Sachin Parajuli, Manoj Kumar Guragai
3D human pose estimation is the task of recovering 3D human joint locations directly from 2D camera images, which requires the model to infer depth from the 2D images alone. We explore two methods in this paper, both of which represent the human pose as a heatmap. The first follows Newell et al. [6] and Martinez et al. [7]: we predict 2D poses and then lift these 2D poses to 3D. The second, inspired by Pavlakos et al. [8], learns 3D pose directly from the 2D images. We observe that while both approaches work well, the mean of their predictions gives the best mean per-joint position error (MPJPE) score.
Title: "Human Pose Estimation in 3D using heatmaps". In: 2022 2nd International Conference on Artificial Intelligence and Signal Processing (AISP), pp. 1-4.
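The two steps the abstract relies on, decoding a joint location from a heatmap and scoring with MPJPE, can be sketched in a few lines. This is a minimal illustration with toy data, not the paper's actual models; the fusion step simply averages two hypothetical 3D predictions, as the abstract describes.

```python
import numpy as np

def decode_heatmap(heatmap):
    """Return the (x, y) peak location of a single-joint 2D heatmap (hard argmax)."""
    row, col = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return np.array([col, row], dtype=float)

def mpjpe(pred, gt):
    """Mean per-joint position error: average Euclidean distance over joints."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

# Toy heatmap with its peak at (x=5, y=2).
hm = np.zeros((8, 8))
hm[2, 5] = 1.0
joint_xy = decode_heatmap(hm)            # array([5., 2.])

# Fusing two models as in the paper: average their 3D joint predictions.
pred_a = np.array([[0.0, 0.0, 0.0], [3.0, 4.0, 0.0]])
pred_b = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
fused = (pred_a + pred_b) / 2.0
gt = np.zeros((2, 3))
print(mpjpe(pred_a, gt), mpjpe(fused, gt))
```

Here the fused prediction scores a lower MPJPE than either single model would on this toy ground truth, mirroring the paper's observation.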
Pub Date: 2022-02-12 | DOI: 10.1109/AISP53593.2022.9760597
Vydha Pradeep Kumar, D. Panda
This paper discusses simulations and applications of nanostructured molybdenum disulfide (MoS2) particles and other complex composite materials. These materials offer several appealing features related to the base element's transfer characteristics (i.e., metal-oxide-semiconductor behaviour) and the strong chemical activity of the sulphur and oxygen family elements. The procedures for creating and structuring MoS2 nanoparticles, and the mechanisms underpinning their biological characteristics and catalytic activity, have seen significant progress that has helped us understand the properties of various materials. The main contribution of this paper is a simulation/synthesis analysis of different TMDC materials in different FET transistor model designs, together with their applications. The benefits and prospects offered by MoS2 nanoparticles, nano-architectures, and similar materials are also discussed.
Title: "Simulation Analysis of Different TMDC Materials and their Performances". In: 2022 2nd International Conference on Artificial Intelligence and Signal Processing (AISP), pp. 1-5.
Pub Date: 2022-02-12 | DOI: 10.1109/AISP53593.2022.9760681
M. Ravi Kumar, S. Panda, Venkateswara Reddy Guruguluri, Namratha Potluri, Nagasree Kolli
Over the last few decades, forecasting lower-atmospheric carbon dioxide (CO2) levels has been emphasized as an important topic among atmospheric scientists and engineers, who aim to develop better predictive models of CO2 levels in view of accelerating pollution. In the present work, we exploit the capability of the autoregressive integrated moving average (ARIMA) model for time-series prediction of CO2 levels, using the long-term air-sample recordings from the Mauna Loa Observatory in Hawaii, USA, covering March 1958 to December 2001. The results reveal that forecasting this parameter with the ARIMA model yields significant improvements over existing techniques for such lower-atmospheric parameters.
Title: "Forecasting Carbon Dioxide Levels Using Autoregressive Integrated Moving Average Model". In: 2022 2nd International Conference on Artificial Intelligence and Signal Processing (AISP), pp. 1-5.
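The core ARIMA idea the abstract uses, differencing away the trend and fitting an autoregression to what remains, can be sketched without a statistics library. This is a hand-rolled ARIMA(1,1,0)-style illustration with assumed orders and illustrative CO2 values, not the paper's fitted model (in practice one would use a library such as statsmodels and select orders properly).

```python
import numpy as np

def arima_110_forecast(series, steps):
    """Minimal ARIMA(1,1,0)-style sketch: difference once to remove trend,
    fit an AR(1) coefficient on the differences by least squares, forecast
    the differences forward, then integrate (cumulatively sum) back."""
    series = np.asarray(series, dtype=float)
    diffs = np.diff(series)                       # d = 1
    x, y = diffs[:-1], diffs[1:]
    phi = float(np.dot(x, y) / np.dot(x, x))      # AR(1) coefficient
    out, level, d = [], series[-1], diffs[-1]
    for _ in range(steps):
        d = phi * d          # next forecast difference
        level += d           # undo the differencing
        out.append(level)
    return np.array(out)

co2 = [315.7, 316.9, 317.6, 318.5, 319.0, 320.1]  # illustrative monthly ppm values
print(arima_110_forecast(co2, 3))
```

On a purely linear series the fitted phi is 1 and the forecast continues the trend exactly, which is a quick sanity check for the implementation.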
Pub Date: 2022-02-12 | DOI: 10.1109/AISP53593.2022.9760585
Mokhtar A. Al-Awadhi, R. Deshmukh
This paper aims to develop a machine-learning-based system for automatically detecting honey adulteration with sugar syrup from honey hyperspectral imaging data. First, the floral source of a honey sample is classified by a botanical-origin identification subsystem. Then, sugar-syrup adulteration is identified and its concentration quantified by an adulteration-detection subsystem. Both subsystems consist of two steps. The first step extracts relevant features from the honey sample using Linear Discriminant Analysis (LDA). In the second step, we utilize the K-Nearest Neighbors (KNN) model to classify the honey botanical origin in the first subsystem and to identify the adulteration level in the second. We assess the proposed system's performance on a public honey hyperspectral image dataset. The results indicate that the proposed system can detect adulteration in honey with an overall cross-validation accuracy of 96.39%, making it an appropriate alternative to current chemical-based detection methods.
Title: "Honey Adulteration Detection using Hyperspectral Imaging and Machine Learning". In: 2022 2nd International Conference on Artificial Intelligence and Signal Processing (AISP), pp. 1-5.
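The KNN classification stage described above is straightforward to sketch. The example below is a toy illustration: the 2D points stand in for LDA-projected hyperspectral features (the LDA step is assumed to have been applied already), and the class labels are hypothetical, not from the paper's dataset.

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    samples under Euclidean distance (the KNN stage of the pipeline)."""
    dists = np.linalg.norm(np.asarray(train_X, dtype=float) - query, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [train_y[i] for i in nearest]
    return Counter(votes).most_common(1)[0][0]

# Toy 2D "LDA features": pure vs. syrup-adulterated honey samples.
X = [[0.0, 0.0], [0.2, 1.0], [5.0, 5.0], [5.2, 6.0]]
y = ["pure", "pure", "adulterated", "adulterated"]
print(knn_predict(X, y, np.array([0.1, 0.5])))   # "pure"
```

The same classifier serves both subsystems in the paper's design: once with botanical-origin labels, once with adulteration-level labels.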
Pub Date: 2022-02-12 | DOI: 10.1109/AISP53593.2022.9760654
Prashant Dhope, Mahesh B. Neelagar
Emotion is the most important factor that distinguishes humans from robots, and machines are becoming more aware of human emotions as artificial intelligence advances. The objective of the proposed method is to use artificial intelligence to build a real-time facial-emotion identification system. The proposed methodology is capable of recognizing all seven fundamental facial emotions: angry, disgust, fear, happy, neutral, sad, and surprise. A self-prepared dataset is utilized to train the algorithm; the model is trained and facial expressions are recognized using a convolutional neural network. Real-time testing is accomplished using a Raspberry Pi 3B+ board and Pi Camera, and a graphical user interface (GUI) is created for the system using PyQt5. The experimental results show that the proposed methodology achieves a recognition accuracy of up to 99.88%.
Title: "Real-Time Emotion Recognition from Facial Expressions using Artificial Intelligence". In: 2022 2nd International Conference on Artificial Intelligence and Signal Processing (AISP), pp. 1-6.
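The building blocks of the convolutional network described above, convolution, ReLU, pooling, and a final 7-way argmax over the emotion classes, can be shown in plain numpy. This is a didactic sketch of the generic operations, not the paper's trained architecture; the kernel, input, and logits below are toy values.

```python
import numpy as np

EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def conv2d(img, kernel):
    """'Valid' 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool2(x):
    """Non-overlapping 2x2 max pooling."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# One conv -> ReLU -> pool stage on a toy 6x6 "face crop".
feat = max_pool2(relu(conv2d(np.ones((6, 6)), np.ones((3, 3)))))
print(feat.shape)

# The network's final 7-way output is decoded by argmax over the classes.
logits = np.array([0.1, 0.0, 0.2, 2.3, 0.4, 0.1, 0.3])
print(EMOTIONS[int(np.argmax(logits))])
```

Stacking several such stages, flattening, and adding dense layers yields the usual facial-expression classifier shape; on-device inference on a Raspberry Pi would run exactly this forward pass with trained weights.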
Pub Date: 2022-02-12 | DOI: 10.1109/AISP53593.2022.9760679
S. Yallamandaiah, N. Purnachand
Face recognition is the process of verifying an individual using facial images, and it is widely employed in identifying people on social media platforms, validating identity at ATMs, finding missing persons, controlling access to sensitive areas, finding lost pets, etc. Face recognition remains an active research area because of challenges such as illumination variations and differences in pose and expression. Here, a novel methodology is introduced for face recognition using the Histogram of Oriented Gradients (HOG), a histogram of Local Binary Patterns (LBP), and a Convolutional Neural Network (CNN). The HOG features, the LBP histogram, and the deep features from the proposed CNN are linearly concatenated to produce the feature space, which is then classified by a Support Vector Machine. Experiments on the ORL, Extended Yale B, and CMU-PIE face databases attain recognition rates of 98.48%, 97.33%, and 97.28%, respectively.
Title: "A novel face recognition technique using Convolutional Neural Network, HOG, and histogram of LBP features". In: 2022 2nd International Conference on Artificial Intelligence and Signal Processing (AISP), pp. 1-5.
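Of the three feature types concatenated above, the LBP histogram is the easiest to show end to end. Below is a basic 3x3 LBP (one common variant; the paper does not specify its exact LBP configuration), followed by the concatenation step; the HOG vector is a zero placeholder standing in for real HOG features.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 3x3 Local Binary Patterns: threshold the 8 neighbours of each
    interior pixel against its centre, pack the results into an 8-bit code,
    and return the normalised 256-bin histogram of codes."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]
    # The 8 neighbours of every interior pixel, clockwise from top-left.
    shifts = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
              img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
              img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros(c.shape, dtype=np.int64)
    for bit, nb in enumerate(shifts):
        codes += (nb >= c).astype(np.int64) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

# Linear concatenation into one feature space, as in the paper.
hog_feats = np.zeros(36)                  # placeholder for real HOG features
img = np.arange(25).reshape(5, 5)
feature_vec = np.concatenate([hog_feats, lbp_histogram(img)])
```

In the full pipeline the CNN's deep features would be appended the same way before the SVM classifier.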
Pub Date: 2022-02-12 | DOI: 10.1109/AISP53593.2022.9760520
V. Meena, Rishika Agrawal, Rajat Gumber, Anuja R. Tipare, Vinay Singh
A Zeta converter consists of two inductors and two capacitors, and hence is a fourth-order system. The output voltage of a Zeta converter can be lower or higher than the input voltage. In this paper, the small-signal dynamic model is obtained using the steady-state averaging technique, and the direct truncation method is then used to reduce the given fourth-order test case to first-, second-, and third-order systems, respectively. Impulse and step responses are plotted, and the integral square error is calculated for the lower and upper limits of the reduced-order transfer functions. The results presented demonstrate the applicability and effectiveness of the proposed method.
Title: "Order Reduction of Continuous Interval Zeta Converter Model using Direct Truncation Method". In: 2022 2nd International Conference on Artificial Intelligence and Signal Processing (AISP), pp. 1-6.
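One common reading of direct truncation is that, with polynomial coefficients listed in ascending powers of s, the reduced r-th-order model simply keeps the low-order coefficients. The sketch below follows that reading as an assumption (the paper's exact procedure is not reproduced here), and the fourth-order interval coefficients are hypothetical, not the paper's converter values.

```python
def truncate_tf(num_asc, den_asc, r):
    """Direct-truncation sketch (assumed formulation): with coefficients in
    ascending powers of s, keep the numerator up to s^(r-1) and the
    denominator up to s^r. Interval coefficients, stored as [lower, upper]
    pairs, truncate the same way."""
    return num_asc[:r], den_asc[:r + 1]

# Hypothetical fourth-order interval model: coefficients of s^0 .. s^4.
den4 = [[4.0, 5.0], [7.0, 8.0], [6.0, 6.5], [3.0, 3.5], [1.0, 1.0]]
num4 = [[2.0, 2.5], [1.0, 1.5], [0.5, 0.6], [0.1, 0.2]]

for r in (1, 2, 3):
    n, d = truncate_tf(num4, den4, r)
    print(f"order {r}: numerator has {len(n)} terms, denominator has {len(d)} terms")
```

Evaluating impulse/step responses and the integral square error for the lower- and upper-limit transfer functions, as the paper does, would then proceed on each truncated coefficient set.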
Pub Date: 2022-02-12 | DOI: 10.1109/AISP53593.2022.9760532
Rashmi S. Nair, Rohit Agrawal, S. Domnic
The Internet of Underwater Things (IoUT) applies Internet of Things (IoT) concepts to monitoring sea-animal habitats, observing the atmosphere, and predicting defense and disaster events. Raw underwater images are degraded by the absorption and scattering of light in the underwater environment. Low-power computational devices are preferred to cut the cost of IoUT devices, and because of the nature of the underwater environment, transmitting the images these devices capture is a major challenge. Solutions are therefore needed that enhance the color, contrast, and brightness of captured underwater images to provide good visual understanding. Conventional compression techniques designed for terrestrial environments cause ringing artefacts owing to the variable characteristics of underwater images, while deep image-compression techniques consume more computational power and time, making them inefficient for low-power devices. In this study, a low-computational-power, less time-consuming image-compression technique is proposed to achieve high encoding efficiency and good reconstruction quality for underwater images. The proposed technique uses a Convolutional Neural Network (CNN) at the encoder side, which compresses the underwater image while retaining its structural data, and a relative global histogram stretching technique at the decoder side to enhance the reconstructed image. The proposed methodology is compared with conventional methods such as JPEG, Better Portable Graphics (BPG), and Contrast Limited Adaptive Histogram Equalization (CLAHE), and with deep-learning techniques such as the Super-Resolution Convolutional Neural Network (SRCNN) and residual encoder-decoder methods, to evaluate reconstructed image quality. The presented work provides higher-quality images than both the conventional and SRCNN methods.
Title: "Light Weight Encoder-Decoder for Underwater Images in Internet of Underwater Things (IoUT)". In: 2022 2nd International Conference on Artificial Intelligence and Signal Processing (AISP), pp. 1-7.
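The decoder-side enhancement the abstract describes can be illustrated with a simple per-channel global histogram stretch. This is a generic percentile min-max stretch standing in for the paper's relative global histogram stretching (the exact algorithm is not given in the abstract), so treat the percentile choices as assumptions.

```python
import numpy as np

def stretch_channel(ch, lo_pct=1.0, hi_pct=99.0):
    """Global histogram stretch of one channel: map the [lo, hi] percentile
    range onto the full [0, 255] range, clipping the tails."""
    lo, hi = np.percentile(ch, [lo_pct, hi_pct])
    if hi <= lo:                       # flat channel: nothing to stretch
        return np.clip(ch, 0, 255).astype(np.uint8)
    out = (ch.astype(float) - lo) / (hi - lo) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

def enhance(img):
    """Stretch each colour channel independently, as a decoder-side
    post-processing step on the reconstructed underwater image."""
    return np.dstack([stretch_channel(img[..., c]) for c in range(img.shape[-1])])

# A washed-out toy channel occupying only the [50, 100] intensity range.
ch = np.linspace(50, 100, 100).reshape(10, 10)
print(stretch_channel(ch).min(), stretch_channel(ch).max())
```

Stretching each channel independently also shifts the colour balance, which is why underwater-specific variants weight the channels relative to one another rather than treating them in isolation.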
Pub Date: 2022-02-12 | DOI: 10.1109/AISP53593.2022.9760528
Ratikanta Sahoo
A novel shape of wideband conformal antenna is presented in this article: a wideband, directional, conformal fork-shaped antenna. A combination of the tapered-slot-antenna concept and dipole arrays is used to obtain a directional radiation pattern. The dipole element is shaped like a fork and has a semi-annular ring structure; following the log-periodic antenna concept, the proposed antenna uses three such fork-shaped dipole elements instead of a wire dipole. The proposed radial cylindrical conformal antenna operates from 3.5 to 4.1 GHz, achieving an impedance bandwidth of 600 MHz, and its gain is around 4.7 to 5.6 dBi within the operating band. The half-power beamwidths (HPBW) at 3.3 and 3.5 GHz are 122° and 116° in the H-plane and 56° and 57° in the E-plane, respectively. The proposed cylindrical antenna is a reasonable candidate for WiMAX applications.
Title: "Design of a Wideband Periodic Cylindrical Conformal Fork shaped Antenna for WiMAX Application". In: 2022 2nd International Conference on Artificial Intelligence and Signal Processing (AISP), pp. 1-4.
Pub Date: 2022-02-12 | DOI: 10.1109/AISP53593.2022.9760607
Sonali Rout, R. Mohapatra
Protecting private and sensitive information is the most pressing issue for security providers in surveillance video, so providing privacy and enhancing secrecy in surveillance video without degrading its usefulness for detecting violent activities is a challenging task. Here, a steganography-based algorithm is proposed that hides private information inside surveillance video without affecting the accuracy of criminal-activity detection. Preprocessing of the surveillance video is performed using the Tunable Q-factor Wavelet Transform (TQWT), the secret data are hidden using the Discrete Wavelet Transform (DWT), and, after the payload is added to the surveillance video, criminal-activity detection is conducted with the same accuracy as on the original video. The UCF-Crime dataset is used to validate the proposed framework. Features are extracted and, after feature selection, a Temporal Convolutional Network (TCN) is trained for detection. Performance is compared with state-of-the-art methods, showing that applying steganography does not affect the detection rate while preserving the perceptual quality of the surveillance video.
Title: "Hiding Sensitive Information in Surveillance Video without Affecting Nefarious Activity Detection". In: 2022 2nd International Conference on Artificial Intelligence and Signal Processing (AISP), pp. 1-6.
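The DWT-based hiding step can be illustrated on a 1D signal with a single-level Haar transform: transform, overwrite some detail coefficients with the secret bits, and invert. This is a deliberately simple sign-based embedding to show the mechanism; the paper's actual TQWT preprocessing and DWT embedding scheme are not reproduced here.

```python
import numpy as np

def haar_dwt(x):
    """Single-level 1D Haar transform: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / 2.0
    d = (x[0::2] - x[1::2]) / 2.0
    return a, d

def haar_idwt(a, d):
    """Exact inverse of haar_dwt."""
    out = np.empty(a.size * 2)
    out[0::2] = a + d
    out[1::2] = a - d
    return out

def embed_bits(signal, bits, alpha=1.0):
    """Hide bits in the signs of the first detail coefficients
    (illustrative scheme, not the paper's exact embedding)."""
    a, d = haar_dwt(signal)
    for i, b in enumerate(bits):
        d[i] = alpha if b else -alpha
    return haar_idwt(a, d)

def extract_bits(signal, n):
    """Recover n bits from the signs of the first detail coefficients."""
    _, d = haar_dwt(signal)
    return [1 if v > 0 else 0 for v in d[:n]]

sig = np.array([10.0, 12.0, 9.0, 9.0, 20.0, 18.0, 5.0, 7.0])
stego = embed_bits(sig, [1, 0, 1])
print(extract_bits(stego, 3))
```

Because the approximation coefficients are untouched, the low-frequency content of the cover signal survives, which is the intuition behind detection accuracy being preserved after embedding.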