Anti-Gan: Discriminating 3D reconstructed and real faces for robust facial Identity in Anti-spoofing Generator Adversarial Network
Pub Date: 2020-12-09 | DOI: 10.1109/ISSPIT51521.2020.9408901
Miao Sun, Gurjeet Singh, Patrick Chiang
3D face reconstruction is an attractive topic in computer vision, and its development has risen dramatically in recent years. State-of-the-art methods can now freely reconstruct a face from a single 2D face image, which poses a threat to facial security. Because reconstructed and real faces are very similar in their feature distributions, an efficient method to discriminate between them is vital. Since Generative Adversarial Nets (GAN) were proposed by Ian J. Goodfellow in 2014, they have been widely trained to approximate the data distributions of many applications. Owing to its adversarial mechanism, GAN shows a powerful generative ability and achieves state-of-the-art results. Inspired by this adversarial mechanism, we propose a similar framework, called Anti-GAN, to discriminate an adversarial dataset drawn from real 3D face datasets and reconstructed face datasets. Considering the computation of backpropagation, both G and D adopt convolutional neural network architectures. Experiments show that Anti-GAN is a powerful way to distinguish real faces from reconstructed faces, and it also offers robust features for a facial identity task.
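As an illustration of the discriminator side of such a framework, the sketch below trains a small convolutional network D to separate real face images from 3D-reconstructed ones. It is a minimal sketch under our own assumptions (64×64 RGB inputs, layer sizes, loss, and the `FaceDiscriminator`/`train_step` names are illustrative), not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a convolutional discriminator trained to
# separate real face images (label 1) from 3D-reconstructed ones (label 0).
import torch
import torch.nn as nn

class FaceDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), # 16x16 -> 8x8
            nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 1),                  # single logit: real vs reconstructed
        )

    def forward(self, x):
        return self.net(x)

def train_step(D, optimizer, real_batch, reconstructed_batch):
    """One update: push real-face logits toward 1 and reconstructed-face logits toward 0."""
    loss_fn = nn.BCEWithLogitsLoss()
    logits_real = D(real_batch)
    logits_fake = D(reconstructed_batch)
    loss = loss_fn(logits_real, torch.ones_like(logits_real)) + \
           loss_fn(logits_fake, torch.zeros_like(logits_fake))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```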
{"title":"Anti-Gan: Discriminating 3D reconstructed and real faces for robust facial Identity in Anti-spoofing Generator Adversarial Network","authors":"Miao Sun, Gurjeet Singh, Patrick Chiang","doi":"10.1109/ISSPIT51521.2020.9408901","DOIUrl":"https://doi.org/10.1109/ISSPIT51521.2020.9408901","url":null,"abstract":"3D face reconstruction is an attractive topic in computer vision. We have seen dramatic rise in its development recently. Now the state-of-the-art method can reconstruct a face from a single 2D face image freely, which brings a threat to facial security society. Since they are very similar in feature distributions, an efficient work to discriminate reconstructed face and real face is vital. Since Generative Adversarial Nets (GAN) has been proposed by Ian J. Goodfellow in 2014, it is extensively trained to approximate data distributions of many applications. For its adversarial mechanism, GAN shows a powerful generative ability to get the state of art. Inspired by its adversarial mechanism, we propose a similar framework called Anti-GAN to discriminate an adversarial dataset from real 3D face datasets and reconstructed face datasets. Considering the computation of backpropagation, G and D all adopt convolutional neural network architecture. Additionally, experiments show that Anti-GAN is a powerful way to distinguish real faces and reconstructed faces. At the same time, it can also offer robust features for a facial identity task.","PeriodicalId":111385,"journal":{"name":"2020 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130994211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Resolution, Sidelobe, and Contrast Analysis of Ultrasound Fourier Based High Frame Rate Imaging
Pub Date: 2020-12-09 | DOI: 10.1109/ISSPIT51521.2020.9408713
Zhaohui Wang
Recently, a variable frame rate imaging method based on the Fourier transform has been developed to increase resolution and reduce sidelobes. Experiments were carried out with the imaging methods D&S (delay and sum), 1-angle HFR (HFR 1), 11-angle HFR (HFR 11), 19-angle HFR (HFR 19), and 91-angle HFR (HFR 91). In the experiments, one linear array was used to construct 2D B-mode images of a tissue-equivalent phantom and a point scatterer. The array had a center frequency of 2.5 MHz, dimensions of 19.2 mm × 14 mm, and 128 elements. The resolution and sidelobe experiments were performed with the point scatterer in a water tank. Results show that HFR 11, HFR 19, and HFR 91 have higher resolution than D&S at all depths. The sidelobe level for HFR 1, HFR 11, HFR 19, D&S, and HFR 91 decreases in that order, and HFR 91 has the lowest sidelobe. The contrast comparison between the HFR and D&S methods was performed on a tissue-equivalent phantom containing eight cones with different contrasts (−15 dB, −10 dB, −5 dB, −2 dB, 2 dB, 4 dB, 7.5 dB, and 12 dB) over the background. The contrast curves of the eight cones for HFR 1, HFR 11, HFR 19, D&S, and HFR 91 shift downward in that order, which is consistent with their sidelobe behavior. The contrast recognition accuracy of HFR 91 is the best. By all evaluation criteria, the high frame rate imaging method is better than the conventional delay-and-sum method at the same frame rate, so high-resolution, low-sidelobe images can be constructed at a high frame rate with this Fourier method.
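For readers unfamiliar with the dB contrast values quoted above, the following is a minimal sketch (an assumption, not necessarily the paper's exact metric) of how a cone's contrast over the background could be computed from mean envelope amplitudes in two image regions; the function and mask names are illustrative.

```python
# Illustrative contrast metric: 20*log10(mean cone amplitude / mean background amplitude).
import numpy as np

def contrast_db(envelope_image, cone_mask, background_mask):
    """envelope_image: 2D B-mode envelope data; masks: boolean arrays of the same shape."""
    mean_cone = np.mean(envelope_image[cone_mask])
    mean_bg = np.mean(envelope_image[background_mask])
    return 20.0 * np.log10(mean_cone / mean_bg)
```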
{"title":"Resolution, Sidelobe, and Contrast Analysis of Ultrasound Fourier Based High Frame Rate Imaging","authors":"Zhaohui Wang","doi":"10.1109/ISSPIT51521.2020.9408713","DOIUrl":"https://doi.org/10.1109/ISSPIT51521.2020.9408713","url":null,"abstract":"Recently, a variable frame rate imaging method based on Fourier transformation has been developed to increase resolution and reduce sidelobe. Experiments with the imaging methods including D&S, 1-angle HFR (HFR 1), 11-angle HFR (HFR 11), 19-angle HFR (HFR 19), and 91-angle HFR (HFR 91) have also been carried out. In the experiment, one linear array was used to construct 2D B-mode images for a tissue-equivalent phantom and pointer scatterer. The array had a center frequency of 2.5MHz, dimensions of 19.2mm×14mm, and 128 elements. The experiments on the resolution and sidelobe were done with pointer scatterer in the water tank. Results show that HFR 11, HFR 19, and HFR 91 have higher resolution than D&S at all depths. The sidelobe for HFR 1, HFR 11, HFR 19, D&S, and HFR 91 decreases in turn, and HFR 91 has the lowest sidelobe. The experiments on the contrast comparison between HFR and D&S method are made on one tissue-equivalent phantom, eight cones with different contrasts (−15dB, −10dB, −5dB, −2dB, 2dB, 4dB, 7.5dB and 12dB) over background. The contrast curves of eight cones for HFR 1, HFR 11, HFR 19, D&S, and HFR 91 shift downward in turn, which is compatible with their sidelobe property. The contrast recognition accuracy of HFR 91 is the best. All evaluation standards show that the high frame rate imaging method is better than the conventional delay and sum method if their frame rates are the same, so high resolution and low-sidelobe images can be constructed at a high frame rate with this Fourier method.","PeriodicalId":111385,"journal":{"name":"2020 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133920433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting Rules-related Attacks in RPL-based Resource-Constrained Wireless Networks
Pub Date: 2020-12-09 | DOI: 10.1109/ISSPIT51521.2020.9408941
Areej Althubaity, R. Ammar, Song Han
The Routing Protocol for Low Power and Lossy Networks (RPL) was designed to meet the routing requirements of resource-constrained wireless networks and to support different topologies as well as various Quality of Service (QoS) levels. In RPL, nodes carefully select the best routes toward the root and avoid routing loops according to their locations in the network. Unfortunately, nodes can be compromised to perform a variety of internal attacks against the RPL rules. To improve security within RPL-based networks, in this paper we extend a centralized Intrusion Detection System (IDS) called ARM with specification-based intrusion modules added to both the root and the RPL nodes, enhancing their ability to detect a wider range of RPL rules-related attacks. Our extensive simulation results show that the proposed IDS, ARM-Pro, achieves high accuracy in detecting RPL rules-related attacks while incurring only a moderate overhead on device resources.
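To make "specification-based" checks concrete, the sketch below tests one well-known RPL rule: a node's advertised Rank must exceed its preferred parent's Rank, here simplified to an increase of at least MinHopRankIncrease. This is an illustrative simplification of the RFC 6550 rank rule, not ARM-Pro's actual detection module.

```python
# Illustrative specification-based rule check (a simplification, not ARM-Pro's module):
# flag DIO advertisements whose rank does not increase monotonically below the parent.
MIN_HOP_RANK_INCREASE = 256  # RFC 6550 default

def violates_rank_rule(advertised_rank: int, parent_rank: int) -> bool:
    """Return True if the advertised rank breaks the monotonic-increase rule."""
    return advertised_rank < parent_rank + MIN_HOP_RANK_INCREASE
```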
{"title":"Detecting Rules-related Attacks in RPL-based Resource-Constrained Wireless Networks","authors":"Areej Althubaity, R. Ammar, Song Han","doi":"10.1109/ISSPIT51521.2020.9408941","DOIUrl":"https://doi.org/10.1109/ISSPIT51521.2020.9408941","url":null,"abstract":"The Routing Protocol for Low Power and Lossy Networks (RPL) was designed to meet the routing requirements of resource-constrained wireless networks to support different topologies as well as various Quality of Services (QoS). In RPL, nodes carefully select the best routes toward the root and avoid routing loops according to their locations in the network. Unfortunately, nodes can be compromised to perform a variety of internal attacks against the RPL rules. To improve the security within the RPL-based networks, in this paper, we extend a centralized Intrusion Detection System (IDS) called ARM, with specification-based intrusion modules added to both the root and the RPL nodes to enhance their ability in detecting a wider range of RPL rules-related attacks. Our extensive simulation results show that the proposed IDS, ARM-Pro, can achieve high accuracy in detecting the RPL rules-related attacks while incurring a moderate overhead on the devices resources.","PeriodicalId":111385,"journal":{"name":"2020 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)","volume":"2013 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134020121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spectral refinement with adaptive window-size selection for voicing detection and fundamental frequency estimation
Pub Date: 2020-12-09 | DOI: 10.1109/ISSPIT51521.2020.9408968
N. Madhu, Mohammed Krini
Spectral refinement (SR) offers a computationally inexpensive means of generating a refined (higher resolution) signal spectrum by linearly combining the spectra of shorter, contiguous signal segments. The benefit of this method has previously been demonstrated on the problem of fundamental frequency (F0) estimation in speech processing, specifically for the improved estimation of very low F0. One drawback of SR, however, is the poorer detection of voicing onsets due to the Heisenberg-Gabor limit on time and frequency resolution, which may also degrade performance in noisy conditions. Transitioning between long- and short-time windows for the spectral analysis can offer a good trade-off in these situations. This contribution presents a method to adaptively switch between short- and long-time windows (and, correspondingly, between the short-term and the refined spectrum) for voicing detection and F0 estimation. The improvements in voicing detection and F0 estimation due to this adaptive switching are conclusively demonstrated on audio signals in clean and corrupted conditions.
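The sketch below illustrates the switching idea only: use a long analysis window (where the refined spectrum pays off) in quasi-stationary regions and fall back to a short window near suspected onsets. The onset test (a frame-energy ratio), window lengths, and threshold are our own assumptions, not the authors' decision rule.

```python
# Conceptual sketch of adaptive window-size selection (not the authors' exact logic).
import numpy as np

def analysis_spectrum(signal, frame_start, n_short=256, n_long=1024, onset_ratio=4.0):
    """Assumes frame_start >= n_short. Returns (spectrum, window_length_used)."""
    prev = signal[frame_start - n_short: frame_start]
    curr = signal[frame_start: frame_start + n_short]
    energy_jump = (np.sum(curr ** 2) + 1e-12) / (np.sum(prev ** 2) + 1e-12)
    if energy_jump > onset_ratio or frame_start + n_long > len(signal):
        # Suspected voicing onset (or too few samples): short window keeps time resolution.
        return np.fft.rfft(curr * np.hanning(n_short)), n_short
    # Quasi-stationary region: long window (or the refined spectrum) gives finer F0 resolution.
    frame = signal[frame_start: frame_start + n_long]
    return np.fft.rfft(frame * np.hanning(n_long)), n_long
```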
{"title":"Spectral refinement with adaptive window-size selection for voicing detection and fundamental frequency estimation","authors":"N. Madhu, Mohammed Krini","doi":"10.1109/ISSPIT51521.2020.9408968","DOIUrl":"https://doi.org/10.1109/ISSPIT51521.2020.9408968","url":null,"abstract":"Spectral refinement (SR) offers a computationally in-expensive means of generating a refined (higher resolution) signal spectrum by linearly combining the spectra of shorter, contiguous signal segments. The benefit of this method has previously been demonstrated on the problem of fundamental frequency (F0) estimation in speech processing – specifically for the improved estimation of very low F0. One drawback of SR is, however, the poorer detection of voicing onsets due to the Heisenberg-Gabor limit on time and frequency resolution. This may also lead to degraded performance in noisy conditions. Transitioning between long- and short-time windows for the spectral analysis may offer a good trade-off in these situations. This contribution presents a method to adaptively switch between short- and long-time windows (and, correspondingly, between the short-term and the refined spectrum) for voicing detection and F0 estimation. The improvements in voicing detection and F0 estimation due to this adaptive switching is conclusively demonstrated on audio signals in clean and corrupted conditions.","PeriodicalId":111385,"journal":{"name":"2020 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134171702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient Topology of Multilevel Clustering Algorithm for Underwater Sensor Networks
Pub Date: 2020-12-09 | DOI: 10.1109/ISSPIT51521.2020.9408985
Hussain Albarakati, R. Ammar, Raafat S. Elfouly
Underwater wireless acoustic sensor networks (UWASNs) have been used as an efficient means of communication to discover and extract data in aquatic environments. Applications of UWASNs include marine exploration, mine reconnaissance, oil and gas inspection, border surveillance, and military applications. However, these applications are limited by the huge volumes of data involved in detection, discovery, transmission, and forwarding. In particular, transmitting and receiving large volumes of data takes an exhaustive amount of time and substantial power, and may still fail to meet real-time constraints. This shortcoming directed our research toward the development of an underwater embedded computer system that meets these constraints. Our research activities have included the extraction of valuable information from under the ocean using data mining approaches. We previously introduced real-time underwater system architectures that use a single computer. In this study, we extend those results and propose a new real-time underwater system architecture for large-scale networks. This architecture uses multiple computers to enhance reliability. Determining the optimal locations of the computers and the assignment of acoustic sensors to them with minimum delay time, power consumption, and load imbalance is an NP-hard problem. We therefore propose a heuristic approach to find the optimal computer locations and their acoustic sensor memberships. We then develop sensor network topologies that reduce data-aggregation latency and data loss and increase the network lifespan. This paper merges the heuristic solutions and topologies to achieve the best network performance. A simulation is performed to show the merit of our results and to measure the performance of the proposed solution.
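As one way to picture the sensor-to-computer membership problem, the sketch below uses a simple greedy rule: each sensor joins the nearest computer that still has spare capacity, which roughly trades propagation delay against load balance. This is an illustrative heuristic under our own assumptions, not the paper's algorithm.

```python
# Illustrative greedy membership heuristic (an assumption, not the paper's method).
import math

def assign_sensors(sensors, computers, capacity):
    """sensors/computers: lists of (x, y, z) positions; capacity: max sensors per computer."""
    load = [0] * len(computers)
    membership = {}
    for s_idx, s in enumerate(sensors):
        # Closest computer first; Euclidean distance is a proxy for acoustic delay.
        candidates = sorted(range(len(computers)), key=lambda c: math.dist(s, computers[c]))
        for c in candidates:
            if load[c] < capacity:
                membership[s_idx] = c
                load[c] += 1
                break
    return membership, load
```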
{"title":"Efficient Topology of Multilevel Clustering Algorithm for Underwater Sensor Networks","authors":"Hussain Albarakati, R. Ammar, Raafat S. Elfouly","doi":"10.1109/ISSPIT51521.2020.9408985","DOIUrl":"https://doi.org/10.1109/ISSPIT51521.2020.9408985","url":null,"abstract":"underwater wireless acoustic sensor networks (UWASNs) have been used as an efficient means of communication to discover and extract data in aquatic environments. Applications of UWASNs include marine exploration, mine reconnaissance, oil and gas inspection, marine exploration, and border surveillance and military applications. However, these applications are limited by the huge volumes of data involved in detection, discovery, transmission, and forwarding. In particular, the transmission and receipt of large volumes of data require an exhaustive amount of time and substantial power to execute, and may still fail to meet real-time constraints. This shortcoming directed our research focus to the advancement of an underwater computer embedded system to meet the required limitations. Our research activities have included the extraction of valuable information from under the ocean using data mining approaches. We previously introduced real-time underwater system architectures that use a single computer. In this study, we extend our results and propose a new real-time underwater system architecture for large-scale networks. This architecture uses multiple computers to enhance its reliability. Determining the optimal locations of computers and their membership of acoustic sensors with minimum delay time, power consumption, and load balance is an NP-hard problem. We therefore propose a heuristic approach to find the optimal locations of computers and their membership of acoustic sensor nodes. We then develop sensor network topologies that reduce data-aggregation latency and data loss and increase the network lifespan. This paper merges heuristic solutions and topologies to achieve the best network performance. A simulation is performed to show the merit of our results and to measure the performance of our proposed solution.","PeriodicalId":111385,"journal":{"name":"2020 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114887013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Queue Analysis for Probabilistic Cloud Workflows
Pub Date: 2020-12-09 | DOI: 10.1109/ISSPIT51521.2020.9408967
Abdullah Alenizi, R. Ammar, Raafat S. Elfouly, Mohammad Alsulami
Cloud applications can be modeled as workflows. These workflows are represented by Directed Acyclic Graphs (DAGs) or non-DAGs. The graph shows the relationships between the tasks that compose a workflow and the dependencies between these tasks. In our previous work, we presented a method for transforming a workflow into an equivalent graph that shows all possible paths the workflow can take. In this paper, we apply the results of that method to multiple workflows arriving at a queue and use the well-known Pollaczek–Khinchine formula to estimate the average waiting and completion times for submitted workflows. We then apply different scheduling algorithms, namely Shortest Job First (SJF) and Longest Job First (LJF), and compare them with First Come First Serve (FCFS).
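For reference, the Pollaczek–Khinchine result for an M/G/1 queue gives the mean waiting time this kind of analysis relies on; how the workflow-path results map onto the service-time moments is the paper's contribution and is not reproduced here.

```latex
% M/G/1 queue with Poisson arrival rate \lambda and service time S (\rho = \lambda E[S] < 1):
% mean waiting time in queue, and mean completion (sojourn) time.
\begin{align*}
  W_q &= \frac{\lambda\, \mathbb{E}[S^2]}{2\,(1-\rho)}, \qquad \rho = \lambda\, \mathbb{E}[S], \\
  T   &= W_q + \mathbb{E}[S].
\end{align*}
```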
{"title":"Queue Analysis for Probabilistic Cloud Workflows","authors":"Abdullah Alenizi, R. Ammar, Raafat S. Elfouly, Mohammad Alsulami","doi":"10.1109/ISSPIT51521.2020.9408967","DOIUrl":"https://doi.org/10.1109/ISSPIT51521.2020.9408967","url":null,"abstract":"Cloud applications can be modeled as workflows. These workflows are represented by Directed Acyclic Graphs (DAGs) or non-DAGs. The graph shows the relationship between tasks that compose a workflow and the dependencies between these tasks. in our previous work, we presented a method for transforming a workflow into an equivalent graph that shows all possible paths that a workflow will take. In this paper, we use the results of that method for multiple workflows coming to a queue and use the famous pollaczek–khintchine formula to estimate the average waiting and completion time for submitted workflows. Then, we use different scheduling algorithms, namely, Shortest Job First (SJF) and Longest Job First (LJF) and compare them with First Come First Serve (FCFS).","PeriodicalId":111385,"journal":{"name":"2020 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116706038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cost Minimization Algorithm for Provisioning Cloud Resources
Pub Date: 2020-12-09 | DOI: 10.1109/ISSPIT51521.2020.9408841
Abdullah Alenizi, R. Ammar, Raafat S. Elfouly, Mohammad Alsulami
Cloud computing offers resources as a utility that can be accessed and rented via web browsers. It also makes paying for resources easy by offering several pricing schemes. However, these schemes can result in overpaying for resources or underutilizing the reserved resources. In this paper, we focus on two main options in the pricing model, namely pay-per-use and reserved instances. With the first, users pay only for what they use, while the second offers up to a 72% discount but requires paying in advance for the whole reserved period. We present two algorithms for provisioning cloud resources that help cloud customers pick the most cost-effective plans for their jobs.
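To illustrate the trade-off between the two options, the sketch below computes which plan is cheaper for a given expected usage; the hourly rate and hours are made-up example numbers, and the 72% figure is the abstract's quoted maximum discount, not a fixed provider rate.

```python
# Illustrative break-even comparison (rates are assumptions; not the paper's algorithms).
def cheaper_plan(on_demand_rate, expected_hours, period_hours, reserved_discount=0.72):
    pay_per_use = on_demand_rate * expected_hours
    reserved = on_demand_rate * (1.0 - reserved_discount) * period_hours  # paid up front
    return ("reserved", reserved) if reserved < pay_per_use else ("pay-per-use", pay_per_use)

# Example: $0.10/h on demand, one 8760-hour year, expecting 3000 hours of actual use.
print(cheaper_plan(0.10, expected_hours=3000, period_hours=8760))
# -> ('reserved', 245.28): 0.10 * 0.28 * 8760 = 245.28, versus 300.00 pay-per-use
```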
{"title":"Cost Minimization Algorithm for Provisioning Cloud Resources","authors":"Abdullah Alenizi, R. Ammar, Raafat S. Elfouly, Mohammad Alsulami","doi":"10.1109/ISSPIT51521.2020.9408841","DOIUrl":"https://doi.org/10.1109/ISSPIT51521.2020.9408841","url":null,"abstract":"Cloud Computing offers resources as a utility that can be accessed and rented via web browsers. It has also made it easy for paying for resources with different ways of pricing. However, that could result in overpaying for resources or underutilizing the reserved resources. In this paper, we focus on two main options in the pricing model, namely, pay-per-use and reserved instances. In the first one, users can pay for what they use only while the second option offers up to 72% discount but they have to pay in advance for the whole reserved period. In this paper, we present two algorithms for provisioning cloud resources to help cloud customers pick the most cost-effective plans for their jobs","PeriodicalId":111385,"journal":{"name":"2020 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116147220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DOAV Estimation Using Special Antenna Array Structure
Pub Date: 2020-12-09 | DOI: 10.1109/ISSPIT51521.2020.9408865
Webert Montlouis
Estimating the direction of arrival of a point source using a planar array is well understood when the source is assumed stationary during the observation interval. In this paper, we classify every technique that relies on this strong assumption as a conventional approach. Many techniques have been proposed to estimate the azimuth and elevation angle pair. If we move away from this assumption and let the source move during the observation window, the azimuth and elevation parameters become time-varying. In this case, additional parameters such as the angular velocities in azimuth and elevation can also be estimated. These additional parameters provide more accurate information to help predict the next position of the object. As the number of parameters of interest increases, as is the case here, the complexity of the problem also increases, so we look for techniques to reduce the computational complexity. Sometimes the reduction in complexity comes in the form of a transformation, a rotation, or the antenna array geometry. In this paper, we use a special antenna array structure to reduce computational complexity and estimate the planar-array parameters of interest.
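A simple first-order model of the time-varying angles, consistent with the abstract's description, is sketched below; the symbols and the planar-array phase convention (elements in the x-y plane, elevation measured from that plane) are our own assumptions, not the paper's notation.

```latex
% Azimuth and elevation drift linearly during the observation window, so the angular
% velocities \omega_\theta and \omega_\phi become additional parameters to estimate.
\begin{align*}
  \theta(t) &= \theta_0 + \omega_\theta\, t, \qquad \phi(t) = \phi_0 + \omega_\phi\, t, \\
  [\mathbf{a}(t)]_{mn} &= \exp\!\Big(j\,\tfrac{2\pi}{\lambda}\,
      \big(x_m \cos\phi(t)\cos\theta(t) + y_n \cos\phi(t)\sin\theta(t)\big)\Big).
\end{align*}
```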
{"title":"DOAV Estimation Using Special Antenna Array Structure","authors":"Webert Montlouis","doi":"10.1109/ISSPIT51521.2020.9408865","DOIUrl":"https://doi.org/10.1109/ISSPIT51521.2020.9408865","url":null,"abstract":"Estimating the Direction of Arrival of a point using a planar array is well understood when the source is assumed stationary during the observation interval. In this paper, we classify every technique that relies on this strong assumption as a conventional approach. Many techniques have been proposed to estimate the azimuth and elevation angles pair. If we move away from this assumption and let the source move during the observation window, the parameters azimuth and elevation are now time-varying. In this case, additional parameters such as angular velocities in azimuth and elevation can also be estimated. The additional parameters can provide more accurate information to help predict the next position of the object. Often when the number of parameters of interest increases, as is the case here, the complexity of the problem also increases. In this situation, we always look for techniques to reduce computational complexity. Sometimes the reduction in complexity comes in the form of transformation, rotation, or antenna array geometry. In this presentation, we use a special antenna array structure to reduce computational complexity and estimate the planar array parameters of interest.","PeriodicalId":111385,"journal":{"name":"2020 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129297928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sentiment analysis using an ensemble approach of BiGRU model: A case study of AMIS tweets
Pub Date: 2020-12-09 | DOI: 10.1109/ISSPIT51521.2020.9408866
Zabit Hameed, S. Shapoval, B. Garcia-Zapirain, Amaia Méndez Zorilla
This paper presents a comparatively simple yet effective deep learning approach for sentiment analysis of Twitter topics. We automatically collected positive and negative tweets and labeled them manually, thus creating a new dataset. We then leveraged a BiGRU model with an ensemble approach for the binary classification of tweets. Our finalized BiGRU model achieved an accuracy of 84.8% and an averaged F1-measure of 84.8% (±0.3). Moreover, the ensemble approach, using the averaged predictions of a 5-fold strategy, provided an accuracy of 86.3% along with an averaged F1-measure of 86.3% (±0.05). Consequently, the ensemble approach offered better performance even on the relatively small dataset used in this study.
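The 5-fold averaging step can be pictured as below: each fold's BiGRU produces a positive-class probability, the five probabilities are averaged, and the average is thresholded. The exact mechanics (threshold, probability output) are assumptions on our part, not details given in the abstract.

```python
# Minimal sketch of the averaged-prediction ensemble (details assumed, not the authors' code).
import numpy as np

def ensemble_predict(fold_models, batch, threshold=0.5):
    """fold_models: five callables mapping a batch of tweets to positive-class probabilities."""
    probs = np.mean([model(batch) for model in fold_models], axis=0)  # shape (n_samples,)
    return (probs >= threshold).astype(int), probs
```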
{"title":"Sentiment analysis using an ensemble approach of BiGRU model: A case study of AMIS tweets","authors":"Zabit Hameed, S. Shapoval, B. Garcia-Zapirain, Amaia Méndez Zorilla","doi":"10.1109/ISSPIT51521.2020.9408866","DOIUrl":"https://doi.org/10.1109/ISSPIT51521.2020.9408866","url":null,"abstract":"This paper presents a comparably simpler yet effective deep learning approach for sentiment analysis of Twitter topics. We automatically collected positive and negative tweets and labeled them manually, and thus created a new dataset. We then leveraged BiGRU model with an ensemble approach for the binary classification of tweets. Our finalized BiGRU model offered an accuracy of 84.8% as well as an averaged F1-measure of 84.8%(±0.3). Moreover, the ensemble approach, using an averaged prediction of 5-fold strategy, provided the accuracy of 86.3% along with the averaged F1-measure of 86.3%(±0.05). Consequently, the ensemble approach offered better performance even on a smaller dataset used in this study.","PeriodicalId":111385,"journal":{"name":"2020 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115901350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Learning Applied to Forest Fire Detection
Pub Date: 2020-12-09 | DOI: 10.1109/ISSPIT51521.2020.9408859
Byron Arteaga, M. Díaz, M. Jojoa
Nowadays, fires in forest areas are very frequent, mainly caused by climate change and by bad practices of the people who live in these areas. Worldwide, the climatic "El Niño" phenomenon has intensified in recent years, increasing the frequency of forest fires due to the high temperatures and prolonged periods of drought it brings. Most forest fires are detected visually from the ground or from the air using a helicopter; this method is not very efficient, since it takes too long to alert the relief corps and requires well-organized logistics. The lack of early-detection means has been evident in the fire events of recent months, and it can be concluded that there are not enough measures to counteract this problem. The purpose of this article is to evaluate the performance of different pre-trained CNN models for the classification of forest fire images, which can be deployed on low-cost development boards such as a Raspberry Pi.
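A typical setup for this kind of evaluation is sketched below: reuse an ImageNet-pretrained backbone (MobileNetV2 chosen here only because it is light enough for a Raspberry Pi class device) and replace the classifier head with a fire / no-fire output. The backbone choice and the torchvision ≥ 0.13 weights API are assumptions; the paper compares several pre-trained CNNs without being limited to this one.

```python
# Illustrative transfer-learning setup (model choice and configuration are assumptions).
import torch.nn as nn
from torchvision import models

def build_fire_classifier(num_classes=2, freeze_backbone=True):
    model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
    if freeze_backbone:
        for p in model.features.parameters():
            p.requires_grad = False           # fine-tune only the new classifier head
    model.classifier[1] = nn.Linear(model.last_channel, num_classes)  # fire / no-fire
    return model
```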
{"title":"Deep Learning Applied to Forest Fire Detection","authors":"Byron Arteaga, M. Díaz, M. Jojoa","doi":"10.1109/ISSPIT51521.2020.9408859","DOIUrl":"https://doi.org/10.1109/ISSPIT51521.2020.9408859","url":null,"abstract":"Nowadays, fires in forest areas are very frequent, mainly caused by climate change and bad practices by the people who live in these areas. In the world the climatic \"El Niño\" phenomenon has intensified in recent years, increasing the frequency of forest fires, due to high temperatures and prolonged periods of drought that occur. Most forest fires are detected visually and from the ground or from the air using a helicopter; this method is not very efficient since it takes too long to alert the relief corps and requires well-organized logistics. The lack of early detection means has been evident in the events that have occurred in recent months (last fires) and it can be concluded that there are not enough measures to counteract this problem.The purpose of this article is to evaluate the performance of different CNN models pre-trained in the classification of forest fire images, which can be applied in economic development cards such as a Raspberry.","PeriodicalId":111385,"journal":{"name":"2020 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126816766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}