Pub Date: 2023-10-27 | DOI: 10.5121/sipij.2023.14501
Michael Karnes, Alper Yilmaz
Deep neural network (DNN) image classification has grown rapidly as a general pattern detection tool for an extremely diverse set of applications, yet dataset accessibility remains a major limiting factor for many of them. This paper presents a novel dynamic learning approach that transfers pretrained knowledge to novel image spaces, extending the algorithm's knowledge domain while reducing dataset collection requirements. The proposed Omni-Modeler generates a dynamic knowledge set by reshaping known concepts into dynamic representation models of unknown concepts. The Omni-Modeler embeds images with a pretrained DNN and formulates a compressed language encoder. The language-encoded feature space is then used to rapidly generate a dynamic dictionary of concept appearance models. The results of this study demonstrate the Omni-Modeler's capability to rapidly adapt across a range of image types, enabling dynamically learning image classification when data availability is limited.
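The dictionary-of-appearance-models idea can be sketched in a few lines: embed each image, then classify by the nearest class prototype in the encoded feature space, updating prototypes on the fly. The snippet below is a minimal illustration on synthetic vectors; the random projection stands in for the paper's pretrained DNN plus compressed encoder, and the names (`DynamicDictionary`, `embed`) are ours, not the authors'.

```python
import numpy as np

class DynamicDictionary:
    """Nearest-prototype classifier whose class dictionary grows on the fly."""

    def __init__(self):
        self.prototypes = {}                      # label -> (mean embedding, count)

    def add_example(self, label, embedding):
        mean, n = self.prototypes.get(label, (np.zeros_like(embedding), 0))
        # incremental mean keeps each appearance model "dynamic"
        self.prototypes[label] = ((mean * n + embedding) / (n + 1), n + 1)

    def classify(self, embedding):
        return min(self.prototypes,
                   key=lambda c: np.linalg.norm(self.prototypes[c][0] - embedding))

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 16))        # stand-in "compressed encoder": random projection
embed = lambda x: x @ W              # the paper embeds with a pretrained DNN instead

model = DynamicDictionary()
cat = rng.normal(0.0, 1.0, 64)       # two synthetic "concepts"
dog = rng.normal(5.0, 1.0, 64)
for _ in range(5):                   # a handful of noisy examples per concept
    model.add_example("cat", embed(cat + rng.normal(0, 0.1, 64)))
    model.add_example("dog", embed(dog + rng.normal(0, 0.1, 64)))
label = model.classify(embed(cat))   # nearest prototype in the encoded space
```

Because new classes enter the dictionary with `add_example` alone, no retraining pass is needed, which is the property that enables rapid adaptation with limited data.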
Title: Omni-Modeler: Rapid Adaptive Visual Recognition with Dynamic Learning (Signal and Image Processing: An International Journal)
Pub Date: 2021-12-31 | DOI: 10.5121/sipij.2021.12603
Anam Hashmi, B. Khan, Omar Farooq
In this paper, different machine learning algorithms such as Linear Discriminant Analysis, Support Vector Machine (SVM), Multi-layer Perceptron, Random Forest, K-nearest Neighbour, and an autoencoder with SVM are compared. This comparison was conducted to seek a robust method that would produce good classification accuracy. To this end, we propose a robust method, an autoencoder combined with an SVM, for classifying raw Electroencephalography (EEG) signals associated with imagined movement of the right hand and the relaxation state. The EEG dataset used in this research was created by the University of Tübingen, Germany. The best classification accuracy achieved was 70.4%, with an SVM using feature engineering. However, our proposed method of an autoencoder combined with an SVM produced a comparable accuracy of 65% without any feature engineering. This research shows that such a motor-imagery classification system can be used in a Brain-Computer Interface (BCI) to mentally control a robotic device or an exoskeleton.
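The autoencoder-plus-SVM pipeline can be sketched with scikit-learn alone: train a small network to reconstruct its input, take the hidden-layer activations as learned features, and fit an SVM on them. The sketch below uses synthetic two-class signals rather than the Tübingen EEG data, and an `MLPRegressor` as a stand-in autoencoder, so it illustrates the pattern rather than the paper's exact architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# synthetic stand-in for two classes of EEG epochs: relaxation vs a class
# with an added oscillatory component
X0 = rng.normal(0, 1.0, (100, 32))
X1 = rng.normal(0, 1.0, (100, 32)) + np.sin(np.linspace(0, 6, 32))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# "autoencoder": a small MLP trained to reconstruct its own input
ae = MLPRegressor(hidden_layer_sizes=(8,), activation="relu",
                  max_iter=2000, random_state=0).fit(X, X)

def encode(Z):
    # hidden-layer activations are the learned compressed features
    return np.maximum(Z @ ae.coefs_[0] + ae.intercepts_[0], 0)

clf = SVC(kernel="rbf").fit(encode(X), y)
acc = clf.score(encode(X), y)   # training accuracy on the synthetic data
```

The appeal of this design, as in the paper, is that the feature extractor is learned from the raw signal, so no hand-crafted feature engineering step is required before the SVM.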
Title: A Comparative Study of Machine Learning Algorithms for EEG Signal Classification
Pub Date: 2021-10-31 | DOI: 10.5121/sipij.2021.12501
Xiaohan Feng, Makoto Murakami
The information explosion makes it easier to ignore information that requires social attention, and news games can make that information stand out. Considerable research also shows that people are more likely to remember narrative content, and virtual environments can increase the amount of information a person recalls. Blending these elements together may therefore help people remember important information. This research aims to provide directional results for researchers interested in combining VR and narrative, enumerating the advantages and limitations of using text or non-text plot prompts in news games, and offering hints for using virtual environments as learning platforms in news games. The research method is to first develop the theory, then create sample news games, and finally compare experimental data across the samples to test the theory. The study compares and analyzes survey data from a VR game that presents the story in non-text format (Group VR), a non-VR game that presents the story in non-text format (Group NVR), a VR game that presents the story in text (Group VRIT), and a non-VR game that presents the story in text (Group NVRIT). This paper describes the experiment. The results show that, among the four groups, the medium that helps subjects remember the most information is a VR news game with a storyline. Subjects' experience is positively correlated with their confidence in recognizing memories, and empathy is positively correlated with the correctness of memories. In addition, the effects of "VR," "experience," and "presenting the story as text or video" on the percentage of correct answers differed depending on the type of question.
Title: Combining of Narrative News and VR Games: Comparison of Various Forms of News Games
Pub Date: 2021-10-31 | DOI: 10.5121/sipij.2021.12502
R. Sabre
This paper concerns continuous-time symmetric alpha-stable processes, which are inevitable in the modeling of certain signals with indefinitely increasing variance, and particularly the case where the spectral measure is mixed: the sum of a continuous measure and a discrete measure. Our goal is to estimate the spectral density of the continuous part from discrete observations of the signal. To that end, we propose a method that samples the signal at periodic instants. We use Jackson's polynomial kernel to build a periodogram, which we then smooth with two spectral windows that take into account the width of the interval where the spectral density is non-zero. We thus bypass the aliasing phenomenon often encountered when estimating a continuous-time process from discrete observations.
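The estimation scheme (sample periodically, form a periodogram, smooth with a spectral window) can be illustrated numerically. The sketch below uses a Hann window as a stand-in for the windows built from Jackson's polynomial kernel, and a toy signal with one spectral line plus broadband noise as a proxy for the discrete and continuous spectral parts.

```python
import numpy as np

rng = np.random.default_rng(2)
n, dt = 2048, 0.1                         # sample at the periodic instants k*dt
t = np.arange(n) * dt

# toy mixed spectrum: one spectral line (discrete part) at 1 Hz
# plus broadband noise (continuous part)
x = np.cos(2 * np.pi * 1.0 * t) + rng.normal(0, 1, n)

# raw periodogram of the discretely observed signal
freqs = np.fft.rfftfreq(n, dt)
per = np.abs(np.fft.rfft(x)) ** 2 / n

# smooth with a normalized spectral window (a Hann window here; the paper
# builds its windows from Jackson's polynomial kernel)
win = np.hanning(31)
win /= win.sum()
smoothed = np.convolve(per, win, mode="same")
peak_freq = freqs[np.argmax(smoothed)]    # the spectral line should dominate
```

The smoothing trades frequency resolution for variance reduction; the paper's window widths are chosen from the support of the continuous spectral density, a detail this toy sketch does not model.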
Title: Mixed Spectra for Stable Signals from Discrete Observations
Pub Date: 2021-10-31 | DOI: 10.5121/sipij.2021.12503
Hadi Mohsen Alkanfery, Ibrahim Mustafa Mehedi
The non-invasive fetal electrocardiogram (fECG), extracted from the abdominal electrocardiogram (AECG) during pregnancy, has become a significant method for monitoring the fetus's physiological condition. Current techniques are limited in detecting and analyzing the fECG during delivery. The non-intrusive fECG recorded from the mother's abdomen is contaminated by a variety of noise sources, which makes removal of the maternal ECG challenging, and this contamination is a major obstacle when fECG extraction is managed by a uni-modal technique. In this research, a new method for extracting the fECG from AECG recordings of the pregnant woman is proposed, based on the combination of the Wavelet Transform (WT) and the Fast Independent Component Analysis (FICA) algorithm. Initially, the signal is preprocessed with a Fractional Order Butterworth Filter (FBWF). To select the direct ECG signal, which serves as the reference signal, and the abdominal signal, which serves as the input to the WT, a cross-correlation technique is used to find the signal with the greatest similarity among the four available abdominal signals. Evaluated on the most frequent fetal heartbeat rates present in the database, the proposed method achieves an MAE of 0.6 and a MAPE of 0.041209. Thus, the proposed methodology for denoising and separating fECG signals can serve as a predominant approach and, with further analysis, assist in understanding the nature of the delivery.
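The overall pipeline (band-pass preprocessing, cross-correlation channel selection, blind source separation) can be sketched with SciPy and scikit-learn on synthetic mixtures. An integer-order Butterworth filter stands in for the fractional-order design, and FastICA alone replaces the WT+FICA combination, so this illustrates the structure rather than the authors' exact method.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
fs = 250
t = np.arange(0, 10, 1 / fs)

def spikes(rate, phase=0.0):
    # crude periodic spike train as a stand-in for an ECG trace
    return np.exp(-(((t * rate + phase) % 1.0 - 0.5) ** 2) / 0.002)

maternal = spikes(1.2)
fetal = 0.3 * spikes(2.3, 0.4)
# four abdominal channels: different maternal/fetal mixtures plus noise
A = np.array([[1.0, 0.2], [0.9, 0.5], [0.8, 0.9], [1.1, 0.1]])
abdominal = A @ np.vstack([maternal, fetal]) + 0.01 * rng.normal(size=(4, t.size))

# preprocessing: band-pass filter (integer-order Butterworth here; the
# paper uses a fractional-order design)
b, a = butter(4, [1, 40], btype="band", fs=fs)
filtered = filtfilt(b, a, abdominal)

# reference selection: the channel most correlated with the direct fetal ECG
direct = fetal + 0.01 * rng.normal(size=t.size)
best = int(np.argmax([abs(np.corrcoef(ch, direct)[0, 1]) for ch in filtered]))

# blind source separation recovers maternal and fetal components
sources = FastICA(n_components=2, random_state=0).fit_transform(filtered.T).T
fetal_est = max(sources, key=lambda s: abs(np.corrcoef(s, direct)[0, 1]))
```

On real AECG data the mixing is nonstationary and the noise far stronger, which is why the paper layers the wavelet transform and fractional-order filtering on top of this basic separation structure.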
Title: Fractional Order Butterworth Filter for Fetal Electrocardiographic Signal Feature Extraction
Pub Date: 2021-04-30 | DOI: 10.5121/SIPIJ.2021.12202
Ali Ahmad Aminu, N. N. Agwu
Digital image tampering detection has been an active area of research in recent times due to the ease with which digital images can be modified to convey false or misleading information. To address this problem, several studies have proposed forensic algorithms for digital image tampering detection. While these approaches have shown remarkable improvement, most of them focus on detecting only a specific type of image tampering, so a new forensic method must be designed for each new manipulation technique that appears. Consequently, there is a need for methods capable of detecting multiple tampering operations. In this paper, we propose a novel general-purpose image tampering detection scheme based on CNNs and the Local Optimal Oriented Pattern (LOOP), capable of detecting five types of image tampering in both binary and multiclass scenarios. Unlike existing deep learning techniques, which use constrained pre-processing layers to suppress the effect of image content in order to capture tampering traces, our method uses LOOP features, which effectively suppress the effect of image content and thus allow the proposed CNNs to capture the features needed to distinguish among different types of image tampering. Through a number of detailed experiments, we demonstrate that the proposed general-purpose method achieves high detection accuracies in both individual and multiclass tampering detection, and a comparative analysis with the existing state of the art reveals that the proposed model is more robust than most existing methods.
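The feature-then-classifier structure can be sketched with a plain local binary pattern (LBP) histogram standing in for LOOP (LOOP additionally reorders the 8 bits by local edge responses) and a linear SVM standing in for the CNN; the "tampering" here is simply median filtering of random textures. Everything below is therefore a simplified analogue of the paper's method, not a reimplementation.

```python
import numpy as np
from scipy.ndimage import median_filter
from sklearn.svm import LinearSVC

def lbp_histogram(img):
    """8-neighbour local binary pattern histogram of a grayscale image.

    Each pixel gets an 8-bit code recording which neighbours are >= it;
    the normalized code histogram is a content-suppressing texture feature.
    """
    c = img[1:-1, 1:-1]
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c) << bit).astype(np.uint8)
    return np.bincount(code.ravel(), minlength=256) / code.size

rng = np.random.default_rng(4)
X, y = [], []
for _ in range(60):
    img = rng.random((32, 32))
    X.append(lbp_histogram(img)); y.append(0)                    # pristine
    X.append(lbp_histogram(median_filter(img, 3))); y.append(1)  # "tampered"
clf = LinearSVC(max_iter=5000).fit(X, y)
acc = clf.score(X, y)
```

Filtering residues alter the local-pattern statistics far more than the image content does, which is why pattern histograms separate the two classes so cleanly here; LOOP feeds a richer version of the same signal into the CNN.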
Title: General Purpose Image Tampering Detection using Convolutional Neural Network and Local Optimal Oriented Pattern (LOOP) (pp. 13-32)
Pub Date: 2021-04-30 | DOI: 10.5121/SIPIJ.2021.12203
C. Kwan, David Gribben, Bence Budavari
Long-range infrared videos, such as the Defense Systems Information Analysis Center (DSIAC) videos, usually do not have high resolution. In recent years, there have been significant advancements in video super-resolution algorithms. Here, we summarize our study on the use of super-resolution videos for target detection and classification. We observed that super-resolution videos can significantly improve detection and classification performance. For example, for 3000 m range videos, we were able to improve the average precision of target detection from 11% (without super-resolution) to 44% (with 4x super-resolution) and the overall accuracy of target classification from 10% (without super-resolution) to 44% (with 2x super-resolution).
Title: Target Detection and Classification Performance Enhancement using Super-Resolution Infrared Videos (pp. 33-45)
Pub Date: 2021-02-28 | DOI: 10.5121/SIPIJ.2021.12102
Xiang-Song Zhang, Wei-Xin Gao, Shihuan Zhu
To eliminate mixed salt-and-pepper and Gaussian noise in X-ray weld images, the extreme-value characteristics of salt-and-pepper noise are used to separate the mixed noise, and a non-local means filtering algorithm is used for denoising. Because the exponentially weighted kernel function is too smooth and tends to blur image details, a cosine coefficient based on that function is adopted, and an improved non-local means denoising algorithm is designed using a weighted Gaussian kernel function. Experimental results show that the new algorithm reduces noise while retaining the details of the original image, increasing the peak signal-to-noise ratio by 1.5 dB. An adaptive salt-and-pepper noise elimination algorithm is also proposed, which automatically adjusts the filtering window and estimates the noise probability. First, a median filter is applied to the image, and the result is compared with the unfiltered image to locate the noise points. The weighted average of the middle three groups of data under each filtering window is then used to estimate the image noise probability. Before filtering, obvious noise points are removed by thresholding, and the central pixel is then estimated with weights proportional to the reciprocal of the squared distance from the window's central pixel. Finally, according to Takagi-Sugeno (T-S) fuzzy rules, the output estimates of the different models are fused using the noise probability. Experiments show that the algorithm performs automatic noise estimation and adaptive window adjustment; after filtering, the standard mean square deviation is reduced by more than 20% and the speed is more than doubled. For enhancement, a nonlinear image enhancement method is proposed that adjusts its parameters adaptively and automatically enhances the weld area rather than the background, achieving the best subjective visual effect. Compared with the traditional method, the enhancement effect is better and more in line with the needs of the industrial field.
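The detect-then-filter step for salt-and-pepper noise can be sketched directly: flag pixels that are extreme-valued and disagree with the local median, use the flagged fraction as a noise-probability estimate, and replace only the flagged pixels. The threshold and test image below are our own illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_salt_pepper(img):
    """Replace only the pixels flagged as impulse noise.

    Detection sketch: salt/pepper pixels sit at the extremes of the range
    AND disagree strongly with the local median; the flagged fraction
    doubles as a noise-probability estimate (0.15 threshold chosen ad hoc).
    """
    med = median_filter(img, size=3)
    extreme = (img == 0.0) | (img == 1.0)
    noisy = extreme & (np.abs(img - med) > 0.15)
    out = img.copy()
    out[noisy] = med[noisy]
    return out, noisy.mean()

rng = np.random.default_rng(5)
clean = np.tile(np.linspace(0.2, 0.8, 64), (64, 1))     # smooth test image
noisy = clean.copy()
mask = rng.random(clean.shape)
noisy[mask < 0.05] = 0.0                                # pepper (~5%)
noisy[mask > 0.95] = 1.0                                # salt (~5%)
restored, p_hat = remove_salt_pepper(noisy)
```

Restricting replacement to flagged pixels is what preserves edges and weld detail; the paper goes further by adapting the window size and fusing several estimates with T-S fuzzy rules, neither of which is modeled here.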
Title: Research on Noise Reduction and Enhancement Algorithm of Girth Weld Image (pp. 9-21)
Pub Date: 2021-02-28 | DOI: 10.5121/SIPIJ.2021.12101
J. Wilkins, M. Nguyen, B. Rahmani
Lawn area measurement is an application of image processing and deep learning. Researchers have used hierarchical networks, segmented images, and other methods to measure lawn area, with varying effectiveness and accuracy. In this project, a deep learning method, specifically a convolutional neural network (CNN), was applied to measure lawn area. We used Keras and TensorFlow in Python to develop a model trained on a dataset of houses, then tuned its parameters with GridSearchCV in scikit-learn (a machine learning library for Python) to estimate the lawn area. The CNN shows high accuracy (94-97%). We conclude that deep learning, especially CNNs, can be a good method with state-of-the-art accuracy for this task.
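The tuning pattern, wrapping a regressor in GridSearchCV and searching a small parameter grid, can be shown with scikit-learn alone; an `MLPRegressor` on synthetic per-image channel statistics stands in for the paper's Keras CNN on house images, to keep the sketch self-contained and dependency-light.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
# toy stand-in for house images: per-image mean R, G, B values, with the
# "lawn fraction" driven by how green-dominant the image is
X = rng.random((200, 3))
y = np.clip(X[:, 1] - 0.5 * (X[:, 0] + X[:, 2]) + 0.5, 0, 1)

# the paper tunes a Keras CNN with GridSearchCV; an sklearn MLP shows the
# same tuning pattern (estimator + param_grid + cross-validation)
grid = GridSearchCV(
    MLPRegressor(max_iter=3000, random_state=0),
    {"hidden_layer_sizes": [(16,), (32,)], "alpha": [1e-4, 1e-2]},
    cv=3, scoring="r2",
).fit(X, y)
```

After fitting, `grid.best_params_` holds the winning configuration and `grid.best_score_` its cross-validated score; with a Keras model the same interface applies once the model is wrapped in a scikit-learn-compatible regressor.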
Title: Application of Convolutional Neural Network in Lawn Measurement (pp. 1-8)
Pub Date: 2021-02-28 | DOI: 10.5121/SIPIJ.2021.12104
Rachana Jaiswal, S. Satarkar
Image processing technologies may be employed for quicker and more accurate diagnosis in the analysis and feature extraction of medical images. Here, an existing level set algorithm is modified and employed to extract the contour of the fetus in an image. In the traditional approach, fetal parameters are extracted manually from ultrasound images; an automatic technique for obtaining fetal biometric measurements is highly desirable because the manual approach lacks consistency and accuracy. The proposed approach utilizes global and local region information for fetal contour extraction from ultrasonic images. The main goal of this research is to develop a new methodology to aid such analysis and feature extraction.
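A global-region level set update of the Chan-Vese type, which region-based hybrid models build on, can be written in a few lines of NumPy: the contour is the zero level set of phi, which evolves so that pixels gravitate toward the closer of the inside/outside mean intensities. The local-region term of the hybrid model is omitted, and the "fetal head" below is a synthetic ellipse, so this is only a sketch of the underlying mechanism.

```python
import numpy as np

def chan_vese(img, n_iter=200, dt=0.5, mu=0.1):
    """Minimal global-region level set (Chan-Vese style) segmentation.

    phi > 0 marks the inside of the contour; each step pulls pixels toward
    whichever region mean (inside c1 / outside c2) they are closer to, with
    a Laplacian term standing in for curvature smoothing.
    """
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    phi = 20.0 - np.hypot(yy - h / 2, xx - w / 2)   # initial circular contour
    for _ in range(n_iter):
        inside = phi > 0
        c1 = img[inside].mean() if inside.any() else 0.0
        c2 = img[~inside].mean() if (~inside).any() else 1.0
        lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
               np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)
        force = (img - c2) ** 2 - (img - c1) ** 2   # global region competition
        phi += dt * (force + mu * lap)
    return phi > 0

# synthetic "fetal head": a bright ellipse on a dark, noisy background
rng = np.random.default_rng(7)
yy, xx = np.mgrid[:64, :64]
truth = ((yy - 32) ** 2 / 18 ** 2 + (xx - 32) ** 2 / 12 ** 2) < 1
img = truth * 0.8 + 0.1 + rng.normal(0, 0.05, (64, 64))
seg = chan_vese(img)
```

Purely global terms like this fail when intensities vary inside the region, which is the situation in real ultrasound and the motivation for adding local region information in the hybrid model.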
Title: Role of Hybrid Level Set in Fetal Contour Extraction (pp. 39-52)