Pub Date: 2021-11-28 | DOI: 10.15849/ijasca.211128.03
Basil Al-Kasasbeh
Cryptography is the core method used to protect communications between the applications, terminals, and agents distributed worldwide and connected via the internet. Yet with the spread of low-energy, low-storage devices in the Internet of Things (IoT), standard cryptographic protocols often cannot be implemented at all because of power constraints, or cannot complete within the time constraints of real-time critical applications, which hinders their usability. To solve this problem, an Adaptive Multi-Application Cryptography System is proposed in this paper. The proposed system consists of a requirements identifier and an implementer, operating at the application and transport layers. The requirements identifier examines the header of the data and determines the underlying application and its type. The requirements are then identified and encoded according to four options: high, moderate, low, and no security requirements. The inputs are processed, and ciphertext is produced using the cryptographic algorithm suited to the identified requirements. The results showed that the proposed system reduces delay by 97% relative to the upper-bound delay of the algorithms used. Keywords: Cryptography, symmetric key encryption, block cipher, delay and performance, quantum computing.
Title: Adaptive Multi-Applications Cryptographic System
Journal: International Journal of Advances in Soft Computing and its Applications
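The abstract's two-stage idea (identify the security requirement, then dispatch to a suitable cipher) can be sketched as follows. This is purely illustrative: the application-to-level table, the level names, and the hash-based toy stream cipher are hypothetical stand-ins, not the paper's actual table or algorithms.

```python
import hashlib

# Hypothetical mapping from application type to security requirement.
LEVELS = {"video_stream": "no", "telemetry": "low",
          "messaging": "moderate", "banking": "high"}

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from key via repeated SHA-256."""
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric stream cipher: XOR with a hash-derived keystream.
    Applying it twice with the same key recovers the plaintext."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def encrypt(app: str, data: bytes, key: bytes) -> bytes:
    """Requirements identifier + implementer: look up the level for the
    application, then choose how (whether) to encrypt."""
    level = LEVELS.get(app, "high")   # unknown apps default to strongest
    if level == "no":
        return data                   # pass through, zero crypto cost
    # A real system would select a different algorithm per level
    # (e.g. lightweight vs. standard block cipher); one toy cipher here.
    return xor_cipher(data, key)
```

In this sketch the "no security" path is what buys the delay reduction: traffic whose header marks it as requiring no protection skips encryption entirely.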
Pub Date: 2021-11-28 | DOI: 10.15849/ijasca.211128.01
The goal of dependency parsing is to find functional relationships among words; for instance, it identifies the subject-object relation in a sentence. Parsing Indonesian requires information about the morphology of a word: Indonesian grammar relies heavily on affixation, combining root words with affixes to form new words, so morphological information should be incorporated. Fortunately, it can be encoded implicitly in a word representation. Embeddings from Language Models (ELMo) is a word representation able to capture morphological information. Unlike the most widely used word representations, such as word2vec or Global Vectors (GloVe), ELMo applies a Convolutional Neural Network (CNN) over characters, so the affixation process can, ideally, be encoded in the word representation. We analyzed nearest-neighbor words and T-distributed Stochastic Neighbor Embedding (t-SNE) visualizations to compare word2vec and ELMo. Our results showed that ELMo encodes morphological information more richly than its counterpart. We then trained our parser using word2vec and ELMo; unsurprisingly, the parser using ELMo achieves higher accuracy, with an Unlabeled Attachment Score (UAS) of 83.08 for ELMo versus 81.35 for word2vec. Hence, we confirm that morphological information is necessary, especially for a morphologically rich language like Indonesian. Keywords: ELMo, Dependency Parser, Natural Language Processing, word2vec
Title: Embedding from Language Models (ELMos)-based Dependency Parser for Indonesian Language
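The comparison above is reported in Unlabeled Attachment Score (UAS): the fraction of tokens whose predicted head index matches the gold head. A minimal sketch (the head indices in the test are toy values, not from the paper):

```python
def uas(gold_heads, pred_heads):
    """Unlabeled Attachment Score.

    gold_heads, pred_heads: one head index per token (0 = root, by the
    usual CoNLL convention). Returns the fraction of matching heads.
    """
    assert len(gold_heads) == len(pred_heads)
    correct = sum(g == p for g, p in zip(gold_heads, pred_heads))
    return correct / len(gold_heads)
```

The paper's 83.08 vs. 81.35 figures are this quantity, expressed as percentages over the whole test treebank.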
Pub Date: 2021-11-28 | DOI: 10.15849/ijasca.211128.05
Ahmad T. Al-Taani
Recently, the volume of Arabic texts and documents on the internet has increased rapidly, generating rich and valuable content on the web. Many parties have contributed to this content, including researchers, companies, governmental agencies, and educational institutions. With such a large body of content, it has become difficult to search for and extract useful information using human skills and search engines alone. This has motivated researchers to propose automated methodologies for extracting summaries or useful information from these documents. Much research has addressed automatic summarization for English and other languages; unfortunately, research on Arabic automatic text summarization (ATS) is still limited and needs more attention. This study presents a critical review and analysis of recent studies in Arabic ATS, covering the main summarization approaches: statistical-based, graph-based, evolutionary-based, and machine learning-based. The selection criteria are based on venue and year of publication, going back five years. Review papers on Arabic ATS are excluded, since this study considers recent methodologies. In conclusion, we recommend that researchers in Arabic text summarization investigate machine learning for the abstractive approach, given the lack of research in this area. Keywords: Automatic Text Summarization, The Arabic Language, Machine Learning, Natural Language Processing, Text Processing, Computational Linguistics.
Title: Recent Advances in Arabic Automatic Text Summarization
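Of the approaches the survey covers, the statistical-based one is the simplest to sketch: score each sentence by the corpus frequency of its content words and keep the top-scoring sentences. The sketch below is illustrative only and uses English tokens, since a real Arabic pipeline would additionally need Arabic tokenization and morphological normalization; the stopword list is a hypothetical placeholder.

```python
from collections import Counter

def summarize(sentences, k=1,
              stopwords=frozenset({"the", "a", "is", "of"})):
    """Extractive, frequency-based summarization (Luhn-style sketch):
    rank sentences by average content-word frequency, keep the top k,
    and return them in their original order."""
    words = [w for s in sentences for w in s.lower().split()
             if w not in stopwords]
    freq = Counter(words)

    def score(s):
        toks = [w for w in s.lower().split() if w not in stopwords]
        return sum(freq[w] for w in toks) / max(len(toks), 1)

    chosen = set(sorted(sentences, key=score, reverse=True)[:k])
    return [s for s in sentences if s in chosen]
```

Graph-based methods replace the frequency score with sentence-similarity centrality, and machine learning methods learn the scoring function from data; the selection step stays the same.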
Pub Date: 2021-11-28 | DOI: 10.15849/ijasca.211128.15
Mekonnen Redi, M. Dananto, N. Thillaigovindan
Reservoir operation studies based purely on storage level, inflow, and release decisions during dry periods fail to yield an optimal operation policy, because the release decision in this period depends heavily on wet-season water conservation and flood-risk management operations. The operation logic in the two seasons is quite different, and if the two operations are not sufficiently coordinated, they may respond poorly to the system dynamics. There are high levels of uncertainty in the model parameters and values and in how they are operated by human or automated systems. Soft computing methods represent the system as an artificial neural network (ANN) in which the input-output relations take the form of fuzzy numbers, fuzzy arithmetic, and fuzzy logic (FL). Neuro-Fuzzy System (NFS) soft computing combines FL and ANN approaches for single-purpose reservoir operation. This study therefore proposes a Bi-Level Neuro-Fuzzy System (BL-NFS) soft computing methodology for short- and long-term operation policies for a newly inaugurated irrigation project in the Gidabo Watershed of the Main Ethiopian Rift Valley Basin. Keywords: Bankruptcy rule, BL-NFS, Reservoir operation, Sensitivity analysis, Soft computing, Water conservation.
Title: A Bi-level Neuro-Fuzzy System Soft Computing for Reservoir Operation
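The fuzzy-logic building block such a neuro-fuzzy controller rests on can be sketched with a triangular membership function and a tiny two-rule release policy. The membership breakpoints, the rules ("storage HIGH → release LARGE", "storage LOW → release SMALL"), and the release levels below are all invented for illustration; they are not the paper's BL-NFS rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a to b, falls b to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def release_fraction(storage_pct):
    """Weighted-average (defuzzified) release from two toy fuzzy rules."""
    high = tri(storage_pct, 50, 100, 150)   # degree that storage is HIGH
    low = tri(storage_pct, -50, 0, 50)      # degree that storage is LOW
    large, small = 0.9, 0.1                 # hypothetical release levels
    total = high + low
    if total == 0:
        return 0.5                          # neutral fallback
    return (high * large + low * small) / total
```

In an actual NFS, the breakpoints and rule weights are the parameters the ANN learns from data rather than fixed constants.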
Pub Date: 2021-11-28 | DOI: 10.15849/ijasca.211128.09
Wahyu Wibowo, Iis Dewi Ratih
Financing analysis is the process of analyzing bank customers' ability to pay installments, so as to minimize the risk of non-payment, known as Non-Performing Financing (NPF). In 2020, the NPF ratio at one of the Islamic banks in Indonesia increased owing to the decline in people's income during the Covid-19 pandemic, leading to poor banking performance; in December 2020 the NPF percentage was 17%. The imbalance between the numbers of good-financing and NPF customers results in poor classification accuracy. This study therefore classifies NPF customers using Logistic Regression with the Synthetic Minority Over-sampling Technique-Nominal Continuous (SMOTE-NC) method. The results indicate that logistic regression with SMOTE-NC is the best model for classifying NPF customers, outperforming logistic regression without SMOTE-NC. The variables with a significant effect are financing period, type of use, type of collateral, and occupation. Logistic regression with SMOTE-NC handles the imbalanced dataset and raises specificity from 0.04 (without SMOTE-NC) to 0.21, with an accuracy of 0.81, sensitivity of 0.94, and precision of 0.86. Keywords: Classification, Islamic Bank, Logistic Regression, Non-Performing Financing, SMOTE-NC.
Title: Classification of Non-Performing Financing Using Logistic Regression and Synthetic Minority Over-sampling Technique-Nominal Continuous (SMOTE-NC)
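The gains above are stated in terms of specificity, sensitivity, accuracy, and precision. These all follow from the binary confusion matrix, and specificity is exactly why class imbalance matters here: a classifier that labels everyone "good financing" scores high accuracy but near-zero specificity. A minimal sketch (the counts in the test are made up, not the paper's data):

```python
def metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix
    counts, with NPF (the minority class) taken as positive."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # recall on the positive class
        "specificity": tn / (tn + fp),   # recall on the negative class
        "precision":   tp / (tp + fp),
    }
```

SMOTE-NC improves the picture by synthesizing minority-class samples before fitting, handling nominal features (collateral type, occupation) and continuous ones (financing period) together, which plain SMOTE cannot.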
Pub Date: 2021-11-28 | DOI: 10.15849/ijasca.211128.04
Sarah Kamil, L. Muhammed
Arrhythmia is a heart condition caused by abnormalities in the heartbeat: the heart's electrical signals do not work properly, resulting in an irregular heartbeat or rhythm and impairing the pumping of blood. Some cases of arrhythmia are not considered serious, while others are very dangerous, life-threatening, and can cause death within a short time. In clinical routine, cardiac arrhythmia is detected from electrocardiogram (ECG) signals. The ECG is an important diagnostic tool for recording the electrical activity of the heart, and its signals can reveal abnormal heart activity; however, because of their small amplitude and short duration, visual interpretation of ECG signals is difficult. We therefore present an approach for identifying arrhythmias from ECG signals. In this study, we propose a Deep Learning (DL) framework, a nine-layer one-dimensional Convolutional Neural Network (1D CNN), for automatically classifying ECG signals into four cardiac conditions: normal (N), Atrial Premature Beat (APB), Left Bundle Branch Block (LBBB), and Right Bundle Branch Block (RBBB). The practical test of this work was executed on the benchmark MIT-BIH database. We achieved an average accuracy of 99%, precision of 98%, recall of 96.5%, specificity of 99.08%, and an F1-score of 95.75%. The obtained results were compared with relevant models and showed that the proposed framework outperformed them on several measures. This indicates that deep convolutional neural networks can be used efficiently for automated detection and thus for protection against cardiovascular disease, helping cardiologists in medical practice by saving time and effort. Keywords: 1-D CNN, Arrhythmia, Cardiovascular Disease, Classification, Deep learning, Electrocardiogram (ECG), MIT-BIH arrhythmia database.
Title: Arrhythmia Classification Using One Dimensional Conventional Neural Network
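The core operation in each of the network's convolutional layers is a 1-D convolution over the signal. A minimal sketch of that operation in valid mode (implemented as cross-correlation, as deep learning frameworks do), applied to a toy sequence standing in for an ECG beat:

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D cross-correlation: slide the kernel over the
    signal and take dot products. Output length = len(signal) -
    len(kernel) + 1."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]
```

A CNN stacks many such filters (with learned kernel weights, nonlinearities, and pooling between layers); the kernel `[1, 0, -1]` below is a hypothetical example that responds to local slope, the kind of edge-like feature useful around QRS complexes.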
Pub Date: 2021-11-28 | DOI: 10.15849/ijasca.211128.10
Zainab Oufqir, Lamiae Binan, A. El Abderrahmani, K. Satori
In this article, we give a comprehensive overview of recent deep learning methods for object detection and their uses in augmented reality. The objective is to present a complete understanding of these algorithms and of how augmented reality functions and services can be improved by integrating them. We discuss in detail the characteristics of each approach and their influence on real-time detection performance. Experimental analyses compare the performance of each method and draw meaningful conclusions for their use in augmented reality: two-stage detectors generally provide better detection performance, while single-stage detectors are significantly more time-efficient and better suited to real-time object detection. Finally, we discuss several future directions to facilitate and stimulate research on object detection in augmented reality. Keywords: object detection, deep learning, convolutional neural network, augmented reality.
Title: Deep Learning for the Improvement of Object Detection in Augmented Reality
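A quantity underlying the detection-performance comparisons in overviews like this is intersection-over-union (IoU), which decides whether a predicted box counts as a correct detection (typically at a threshold such as 0.5). A minimal sketch for axis-aligned boxes given as `(x1, y1, x2, y2)`:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

Both detector families are scored this way; the two-stage/single-stage trade-off discussed above is about how many candidate boxes reach this test and how fast.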
Pub Date: 2021-02-28 | DOI: 10.5121/IJSC.2021.12101
Kishore Sugali, Christine D. Sprunger, Venkata N. Inukollu
Artificial Intelligence and Machine Learning have been around for a long time, and in recent years there has been a surge in the popularity of applications integrating AI and ML technology. As with traditional development, software testing is a critical component of a successful AI/ML application, yet the development methodology used in AI/ML differs significantly from traditional development, and these differences raise various software-testing challenges. This paper focuses on the challenge of effectively splitting data into training and testing sets. By applying a k-means clustering strategy to the data set, followed by a decision tree, we can significantly increase the likelihood that the training set represents the domain of the full dataset, and thus avoid training a model likely to fail because it has learned only a subset of the full data domain.
Title: AI Testing: Ensuring a Good Data Split Between Data Sets (Training and Test) using K-means Clustering and Decision Tree Analysis
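The splitting step the abstract describes can be sketched as: once k-means has assigned each sample a cluster label, split each cluster proportionally, so both the training and test sets cover every region of the data domain. The clustering itself is omitted here; the labels in the test stand in for k-means output, and the decision-tree verification step the paper pairs with it is not shown.

```python
from collections import defaultdict

def split_by_cluster(samples, labels, train_frac=0.8):
    """Stratified train/test split by cluster label: take train_frac of
    each cluster for training, the remainder for testing, so no cluster
    is absent from either set."""
    by_cluster = defaultdict(list)
    for s, c in zip(samples, labels):
        by_cluster[c].append(s)
    train, test = [], []
    for items in by_cluster.values():
        cut = int(len(items) * train_frac)
        train.extend(items[:cut])
        test.extend(items[cut:])
    return train, test
```

A purely random split can, by chance, leave an entire cluster out of the training data; per-cluster stratification is what removes that failure mode.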
Pub Date: 2021-01-01 | DOI: 10.1007/978-3-030-89820-5
Title: Advances in Soft Computing: 20th Mexican International Conference on Artificial Intelligence, MICAI 2021, Mexico City, Mexico, October 25–30, 2021, Proceedings, Part II
Pub Date: 2020-11-30 | DOI: 10.5121/ijsc.2020.11401
Biswaranjan Mandal
The present paper deals with an inventory management system for deteriorating items with ramp-type and quadratic demand rates; a constant deterioration rate is assumed in the model. For the two model types, the optimum time and total cost are derived when demand is ramp-type and when it is quadratic, and a structural comparative study is demonstrated by illustrating the model with a sensitivity analysis.
Title: An Inventory Management System for Deteriorating Items with Ramp Type and Quadratic Demand: A Structural Comparative Study
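The two demand rates being compared take standard forms in the inventory literature; the notation below (constants $a, b, c$, breakpoint $\mu$, Heaviside step $H$) is generic and may differ from the paper's own symbols and parameter values.

```latex
% Quadratic demand rate: demand grows as a quadratic in time t
D(t) = a + b\,t + c\,t^{2}, \qquad a > 0,\; b, c \geq 0

% Ramp-type demand rate: linear growth up to time \mu, constant after
D(t) = a + b\,\bigl[\,t - (t - \mu)\,H(t - \mu)\,\bigr],
\qquad H(x) = \begin{cases} 0, & x < 0 \\ 1, & x \geq 0 \end{cases}
```

Under the ramp-type form, $D(t) = a + bt$ for $t < \mu$ and $D(t) = a + b\mu$ thereafter, which is what makes the optimal replenishment time depend on whether the cycle ends before or after the breakpoint.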