Pub Date: 2021-07-24 | DOI: 10.5121/CSIT.2021.111110
D. Pinto-Roa, H. Medina, Federico Román, M. García-Torres, F. Divina, Francisco Gómez-Vela, Félix Morales, Gustavo Velázquez, Federico Daumas, José L. Vázquez-Noguera, Carlos Sauer Ayala, P. E. Gardel-Sotomayor
The discovery and description of patterns in electric energy consumption time series is fundamental for timely management of the system. A bicluster describes a subset of observation points over a time period in which a consumption pattern, such as abrupt changes or instabilities, occurs homogeneously. Nevertheless, the complexity of pattern detection increases with the number of observation points and with the number of samples in the study period. In this context, current biclustering techniques may not detect significant patterns given the increased search space. This study develops a parallel evolutionary computation scheme to find biclusters in electric energy consumption data. Numerical simulations show the benefits of the proposed approach, which discovers significantly more electricity consumption patterns than a state-of-the-art non-parallel competitive algorithm.
Title: Parallel Evolutionary Biclustering of Short-term Electric Energy Consumption (Computer science & information technology)
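The abstract does not detail the evolutionary operators or the fitness function used; as background, a homogeneity measure commonly used as a fitness in evolutionary biclustering is Cheng and Church's mean squared residue (MSR). A minimal sketch, with illustrative names not taken from the paper:

```python
# Mean squared residue (MSR) of a bicluster: a standard homogeneity score
# often minimized by evolutionary biclustering algorithms. Rows of `data`
# are observation points, columns are time samples; `rows` and `cols`
# select the bicluster.

def mean_squared_residue(data, rows, cols):
    n = len(rows) * len(cols)
    row_means = {i: sum(data[i][j] for j in cols) / len(cols) for i in rows}
    col_means = {j: sum(data[i][j] for i in rows) / len(rows) for j in cols}
    overall = sum(data[i][j] for i in rows for j in cols) / n
    # Each cell's residue measures its deviation from a purely additive pattern.
    return sum(
        (data[i][j] - row_means[i] - col_means[j] + overall) ** 2
        for i in rows for j in cols
    ) / n
```

A bicluster following a purely additive pattern scores 0; an evolutionary search would typically minimize this residue while rewarding larger biclusters.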
Pub Date: 2021-07-24 | DOI: 10.5121/CSIT.2021.111102
Hoda Nematy
In traditional cellular infrastructure, devices communicate through the network even when they are close together. This strategy causes massive traffic on the cellular network; device-to-device (D2D) communication has therefore been introduced to overcome this issue, bringing more bandwidth and higher rates to the cellular network. One of the major challenges for D2D communication is to have one single secure protocol that can adapt to the four D2D scenarios defined in the literature: direct D2D and relaying D2D communication, each with and without cellular infrastructure. In this paper, we propose a secure D2D protocol based on ARIADNE with TESLA, and we use the LTE-A AKA protocol for the authentication and key agreement procedure between source and destination. We then adapt this scheme to the scenarios without cellular infrastructure; the protocol can also be used in direct D2D communication. Based on the results, our proposed protocol has low computation overhead compared to recent works and less communication overhead than SODE, while preserving many security properties such as authentication, authorization, confidentiality, integrity, secure key agreement, and secure routing transmission. We check the authentication, confidentiality, reachability, and secure key agreement of the proposed protocol with the ProVerif verification tool.
Title: Secure Protocol for Four D2D Scenarios
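The full protocol (ARIADNE routing plus LTE-A AKA key agreement) is beyond a short sketch, but the TESLA building block it relies on is a one-way key chain: keys are generated by repeated hashing and later disclosed in reverse order, so a receiver holding only the chain's commitment can authenticate each disclosed key. A minimal illustration, not the authors' implementation (SHA-256 is an assumed hash choice):

```python
import hashlib

def make_key_chain(seed: bytes, length: int):
    """Build a TESLA-style one-way key chain where K[i] = H(K[i+1]).
    After reversal, chain[0] is the public commitment and chain[-1] is
    the seed; keys are disclosed in increasing index order."""
    chain = [seed]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    chain.reverse()  # chain[0] becomes the commitment
    return chain

def verify_key(commitment: bytes, key: bytes, index: int) -> bool:
    """A key disclosed at `index` must hash back to the commitment
    in exactly `index` steps."""
    h = key
    for _ in range(index):
        h = hashlib.sha256(h).digest()
    return h == commitment
```

The one-way property of the hash means an attacker who sees K[i] still cannot forge the not-yet-disclosed K[i+1], which is what gives TESLA its delayed-authentication guarantee.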
Pub Date: 2021-07-24 | DOI: 10.5121/CSIT.2021.111109
M. A. L. Vinagreiro, Edson C. Kitani, A. Laganá, L. Yoshioka
Computer vision plays a crucial role in ADAS security and navigation. Most systems are based on deep CNN architectures, and the computational resources required to run a CNN algorithm are demanding; methods to speed up computation have therefore become a relevant research issue. Several acceleration techniques found in the literature have not yet achieved satisfactory results for embedded real-time system applications. This paper presents an alternative approach based on the Multilinear Feature Space (MFS) method, resorting to transfer learning from large CNN architectures. The proposed method uses CNNs to generate feature maps, although it does not work as a complexity-reduction approach. When the training process ends, the generated maps are used to create a vector feature space. We use this new vector space to project any new sample in order to classify it. Our method, named MFS-CNN, uses transfer learning from a pre-trained CNN to reduce the classification time of new sample images, with minimal loss in accuracy. The method uses the VGG-16 model as the base CNN architecture for the experiments; however, it works with any similar CNN model. Using the well-known Vehicle Image Database and the German Traffic Sign Recognition Benchmark, we compared the classification time of the original VGG-16 model with that of MFS-CNN, and our method is, on average, 17 times faster. The fast classification time reduces the computational and memory demands of embedded applications that require a large CNN architecture.
Title: Using Multilinear Feature Space to Accelerate CNN Classification
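The multilinear projection itself is not reproduced here; as a simplified stand-in for the final classification step, the sketch below compares a CNN-derived feature vector against per-class mean feature vectors built from the training maps. Names and structure are illustrative assumptions, not the MFS-CNN algorithm:

```python
# Simplified feature-space classification: build one centroid per class
# from training feature vectors, then assign a new sample to the class
# of the nearest centroid. A real MFS pipeline would use multilinear
# projections of CNN feature maps instead of plain means.

def build_centroids(features_by_class):
    """features_by_class: {label: [feature_vector, ...]}"""
    centroids = {}
    for label, vectors in features_by_class.items():
        dim = len(vectors[0])
        centroids[label] = [
            sum(v[d] for v in vectors) / len(vectors) for d in range(dim)
        ]
    return centroids

def classify(feature_vector, centroids):
    """Return the label whose centroid is closest in squared distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(feature_vector, centroids[lbl]))
```

The speed-up idea is that, after one forward pass to extract features, classification reduces to a few vector comparisons instead of running the full network head.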
Pub Date: 2021-07-24 | DOI: 10.5121/CSIT.2021.111104
M. Falcó, Gabriela Robiolo
Project managers, product owners, and quality assurance leaders need to visualize the entire picture of the development process and comprehend the product quality level in a synthetic and intuitive way, in order to facilitate the decision to accept or reject each iteration within the software life cycle. This is especially important nowadays because time is a key resource that should be managed wisely to obtain a feasible quality level for each software deliverable. This article presents a novel solution called the Product Quality Evaluation Method (PQEM) to evaluate a set of quality characteristics for each iteration of a software product. PQEM is based on the Goal-Question-Metric approach, the ISO/IEC 25010 standard, and an extension of testing coverage used to obtain the quality coverage of each quality characteristic. The outcome of PQEM is a single value per iteration of a product, representing its quality as an aggregated measure. Even though a single value is not the usual way of measuring quality, we believe it can be useful for easily understanding the quality level of each iteration. An illustrative example of the method was carried out with a web and mobile application within the healthcare environment.
Title: Product Quality Evaluation Method (PQEM): A Comprehensive Approach for the Software Product Life Cycle
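The abstract does not specify how the per-characteristic coverage values are combined into the single iteration value; one plausible reading, sketched below under that assumption, is a weighted mean over the ISO/IEC 25010 characteristics selected for the product:

```python
def pqem_score(coverage, weights=None):
    """Aggregate per-characteristic quality coverage (each in 0..1) into
    one iteration-level value. The actual PQEM aggregation may differ;
    this weighted mean is an illustrative assumption."""
    if weights is None:
        weights = {c: 1.0 for c in coverage}  # equal importance by default
    total_weight = sum(weights[c] for c in coverage)
    return sum(coverage[c] * weights[c] for c in coverage) / total_weight
```

For example, a team that weights reliability three times as heavily as usability would see a low reliability coverage pull the iteration's single value down proportionally.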
Pub Date: 2021-07-24 | DOI: 10.5121/CSIT.2021.111107
Vikas Thammanna Gowda
In the present monetary situation, credit card use has become commonplace. These cards allow the user to make payments online and in person. Online payments are very convenient, but they come with their own risk of fraud. With the expanding number of credit card users, fraud is expanding at the same rate. Machine learning algorithms can be applied to tackle this problem. This paper presents an evaluation of supervised and unsupervised machine learning algorithms for credit card fraud detection.
Title: Credit Card Fraud Detection using Supervised and Unsupervised Learning
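The abstract does not name the specific algorithms evaluated; as a toy illustration of the unsupervised side, the sketch below flags transactions whose amount is a statistical outlier. Real detectors use far richer features and models; everything here is illustrative:

```python
import statistics

def zscore_outliers(amounts, threshold=3.0):
    """Flag transaction indices whose amount deviates more than
    `threshold` standard deviations from the mean. A toy unsupervised
    detector: it needs no fraud labels, only the observed amounts."""
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    return [i for i, a in enumerate(amounts) if abs(a - mean) > threshold * stdev]
```

The contrast with the supervised setting is that a supervised model would instead be trained on historical transactions labeled fraud/legitimate; the unsupervised detector trades that label requirement for weaker, distribution-based evidence.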
Pub Date: 2021-07-24 | DOI: 10.5121/CSIT.2021.111105
Roberto Maranca, M. Staiano
The “data supply chains” (DSCs) that connect the point where physical information is digitized to the point where the data is consumed are getting longer and more convoluted. Although plenty of frameworks have emerged in the recent past, none of them, in the authors’ opinion, has so far provided a robust set of formalised “how to” guidance that would connect a “well built” DSC to a higher likelihood of achieving the expected value. This paper aims at demonstrating: (i) a generalized model of the DSC in its constituent parts (source, target, process, controls), and (ii) a quantification methodology that links the underlying current quality, as well as the legacy “bad data”, to the cost or effort of attaining the desired value. Such an approach offers a practical and scalable model that makes it possible to restructure some data management practices at their foundation, priming them for the digital challenges of the future.
Title: A Generalized Approach to Data Supply Chain Management – Balancing Data Value and Data Debt
Pub Date: 2021-07-10 | DOI: 10.5121/CSIT.2021.111010
T. Schirgi
In contrast to the increasing degree of automation in the production industry, commissioning and maintenance activities will remain essentially manual. Production involves repetitive actions that are manageable and clearly defined as a process; commissioning and maintenance, by contrast, have to deal with uncontrollable, undefined, and non-standardized processes. The paper provides a framework for a multimedia assistance system for singletons. It was found that the paradigm has to consist of five key components to provide tailored assistance to customers: Expertise, Infrastructure, Application & Platforms, Security & Privacy, and Business Process & Business Model. The resulting stack and the overlaying business model are called "CaRE – Custom Assistance for Remote Employees". With a user-centered approach, the needs of the target group were identified. Based on this, the framework was implemented in the form of a prototypical application. To check whether the assumptions regarding a multimedia assistance system are correct, the prototype was tested with a remote usability test.
Title: Care – A Framework for a Multimedia Assistance System for Singletons “Does It Help?”
Pub Date: 2021-07-10 | DOI: 10.5121/CSIT.2021.111008
Hernandez Pedro, Espitia Edinson
Title: Enhancing Security in Internet of Things Environment by Developing an Authentication Mechanism using COAP Protocol (vol. 11, p. 101; no abstract available)
Pub Date: 2021-07-10 | DOI: 10.5121/CSIT.2021.111003
Robert B. Cohen
When AI models and machine learning are fully interconnected in factories with cabling-free 5G wireless networks, firms become “fully digital”. This analysis argues that the greatest value comes not from the initial efficiencies gained by optimizing a plant’s operations but from a firm’s ability to build a collection of knowledge about each step of its operations, what we call “knowledge synthesis”: information about how each product is produced, how the process to produce it is managed and optimized, and the software and systems required. This knowledge is important because it permits firms to exploit network effects by connecting plants together or sharing expertise with partners, which greatly expands the potential economic benefits of AI and 5G. This review explores cases from firms with smart factories that have adopted AI and 5G communications, including Moderna, Sanofi, Mercedes, Ford, and VW. It examines how these firms have benefitted from the move to smart factories with 5G communications networks. It also explores how firms have improved their value chains by building smart factories that connect nearly all manufacturing processes to machine learning and AI models that analyze machine and process data rapidly. They then take advantage of the network effects enabled by “knowledge synthesis” in early smart factories with 5G networks to derive even larger benefits inside their production operations and in their supply chains. In both phases, the adoption of 5th Generation wireless in plants ramps up firms’ abilities to interconnect their digital systems. Once the interconnected systems exist, firms exploit network effects to create “knowledge synthesis”, or knowledge platforms, to consolidate insights gained from optimizing many machines and processes. Using “knowledge synthesis”, firms can also transfer knowledge from one group of equipment to another that is not optimized, even when the equipment is in different facilities. This makes firms far more flexible, interoperable, and scalable.
Title: Manufacturers, AI Models and Machine Learning, Value Chains, and 5th Generation Wireless Networks
Pub Date: 2021-07-10 | DOI: 10.5121/CSIT.2021.111002
Jaekwang Kim
In this study, we study a technique for predicting heavy/non-heavy rainfall six hours ahead using the values of weather attributes. Through this study, we investigated whether heavy/non-heavy forecasts are driven by specific patterns in the weather maps representing heavy and non-heavy rain, or by seasonal variation. For the experiment, a 20-year cumulative weather map was learned with a Support Vector Machine (SVM) and tested using a labeled set of heavy-rain and non-heavy-rain cases. As a result of the experiment, it was found that the heavy rain prediction of the SVM showed an accuracy rate of up to 70%, and that it was seasonal variation, rather than a specific pattern, that influenced the prediction.
Title: Seasonal Heavy Rain Forecasting Method
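The setup described (predict the heavy/non-heavy label six hours ahead from current attribute values) implies a training-pair construction along the lines sketched below; the exact preprocessing is an assumption, and the SVM itself would come from a library rather than being written by hand:

```python
def make_training_pairs(records, horizon=6):
    """Pair each hourly weather-attribute vector with the heavy-rain
    label observed `horizon` hours later (the study predicts 6 hours
    ahead). `records` is a chronological list of
    (attribute_vector, is_heavy_rain) tuples; this pairing scheme is an
    illustrative assumption, not the paper's exact preprocessing."""
    pairs = []
    for t in range(len(records) - horizon):
        features, _ = records[t]            # attributes observed now
        _, future_label = records[t + horizon]  # label 6 hours later
        pairs.append((features, future_label))
    return pairs
```

The resulting (features, label) pairs are exactly what an SVM trainer consumes; shuffling or splitting them by season would also let one probe the seasonal-variation effect the study reports.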