Abstract The authors consider the construction of a nonlinear multiple regression model, together with its confidence and prediction intervals, for estimating the effort of mobile application development in the planning phase, based on a multivariate normalizing transformation and outlier detection. The constructed model is compared with a linear regression model and with nonlinear regression models based on univariate transformations, such as the decimal logarithm and the Box–Cox and Johnson transformations. In comparison with the other regression models, this model has better prediction accuracy.
{"title":"Estimating the Efforts of Mobile Application Development in the Planning Phase Using Nonlinear Regression Analysis","authors":"S. Prykhodko, N. Prykhodko, K. Knyrik","doi":"10.2478/acss-2020-0019","DOIUrl":"https://doi.org/10.2478/acss-2020-0019","url":null,"abstract":"Abstract The authors consider the construction of a nonlinear multiple regression model, its confidence and prediction intervals to evaluate the efforts of mobile application development in the planning phase based on the multivariate normalizing transformation and outlier detection. The constructed model is compared to the linear regression model and nonlinear regression models based on the univariate transformations, such as the decimal logarithm, Box–Cox, and Johnson transformation. This model, in comparison with other regression models, has better prediction accuracy.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"572 1","pages":"172 - 179"},"PeriodicalIF":1.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83536853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
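The decimal-logarithm and Box–Cox transformations mentioned in the abstract have simple closed forms. A minimal sketch of the univariate Box–Cox transform and its inverse (the λ value and effort figures below are illustrative, not taken from the paper; in practice λ is estimated from the data, e.g., by maximum likelihood):

```python
import math

def box_cox(x, lam):
    """One-parameter Box-Cox transform of a positive value x.

    y = (x**lam - 1) / lam  for lam != 0
    y = ln(x)               for lam == 0
    """
    if x <= 0:
        raise ValueError("Box-Cox requires x > 0")
    if lam == 0:
        return math.log(x)
    return (x ** lam - 1.0) / lam

def inverse_box_cox(y, lam):
    """Back-transform a prediction to the original scale."""
    if lam == 0:
        return math.exp(y)
    return (lam * y + 1.0) ** (1.0 / lam)

# Hypothetical right-skewed effort values (person-hours)
efforts = [120.0, 340.0, 95.0, 1500.0, 60.0]
lam = 0.2  # illustrative lambda, not estimated here
transformed = [box_cox(e, lam) for e in efforts]
restored = [inverse_box_cox(t, lam) for t in transformed]
```

A regression model is then fitted in the transformed (approximately normal) space, and predictions are back-transformed to the original effort scale.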
Abstract The objective of the paper is to identify predictive models in stock market prediction, focusing on a scenario of the emerging markets. Exploratory analysis and conceptual modelling based on the extant literature published between 1933 and 2020 have been used in the study. The databases of Web of Science, Scopus, and JSTOR ensure the reliability of the literature. Bibliometric and scientometric techniques have been applied to the retrieved articles to create a conceptual framework by mapping interlinks and limitations in past studies. The research focuses on hybrid models that integrate big data, social media, and real-time streaming data. A key finding is that the actual phenomena affecting stock market sectors are diverse and, hence, of limited generalizability. Future research must focus on models empirically validated within the emerging markets. Such an approach will offer insight to analysts, researchers, policymakers, and regulators.
{"title":"A Bibliometric Review of Stock Market Prediction: Perspective of Emerging Markets","authors":"Arjun R, Suprabha Kudigrama Rama","doi":"10.2478/acss-2020-0010","DOIUrl":"https://doi.org/10.2478/acss-2020-0010","url":null,"abstract":"Abstract The objective of the paper is to identify predictive models in stock market prediction focusing on a scenario of the emerging markets. An exploratory analysis and conceptual modelling based on the extant literature during 1933 to 2020 have been used in the study. The databases of Web of Science, Scopus, and JSTOR ensure the reliability of the literature. Bibliometrics and scientometric techniques have been applied to the retrieved articles to create a conceptual framework by mapping interlinks and limitations in past studies. Focus of research is hybrid models that integrate big data, social media, and real-time streaming data. Key finding is that actual phenomena affecting stock market sectors are diverse and, hence, limited in generalization. The future research must focus on models empirically validated within the emerging markets. Such an approach will offer an insight to analysts and researchers, policymakers or regulators.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"16 1","pages":"77 - 86"},"PeriodicalIF":1.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73154376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Applications of the Internet of Things (IoT) are known for connecting devices via the internet. The main purpose of IoT systems (wireless or wired) is to connect devices for data collection, buffering, and data gatewaying. The large volumes of collected data are often captured from remote sources for automatic data analytics or for direct decision making by users. This paper applies the programming pattern for Big Data in IoT systems that makes use of lightweight Java methods, introduced in the recently published work on the ClientNet Distributed Cluster. Considering Big Data in IoT systems means the sensing of data from different sources, the network of IoT devices collaborating in data collection and processing, and the gateway servers to which the resulting big data is directed or where it is further processed. This mainly involves resolving the issues of Big Data, i.e., size and network transfer speed, along with many other issues of coordination and concurrency. The computer network that connects IoT devices may further include techniques such as Fog and Edge computing that resolve many of the network issues. This paper provides solutions to these problems as they occur in wireless and wired systems. It discusses the ClientNet programming model and its application in IoT systems for orchestration, such as coordination, data communication, device identification, and synchronization between gateway servers and devices. These devices include sensors attached to appliances (e.g., home automation, supply chain systems, light and heavy machines, vehicles, power grids), buildings, and bridges, as well as computers running data processing applications. As described in earlier papers, the introduced ClientNet techniques avoid big data transfers and streaming, which occupy more resources (hardware and bandwidth) and time.
The idea is motivated by the Big Data problems that make it difficult to collect data from different sources through small devices and then redirect it. The proposed programming model of the ClientNet Distributed Cluster stores Big Data on the nearest server, coordinated by the nearest coordinator. The gateways and the systems that run analytics programs communicate by running programs from other computers only when essentially required. This lets Big Data rarely move across the communication network and allows only the source code to move around the network. The given programming model greatly reduces data communication overheads and simplifies communication patterns among devices, networks, and servers.
{"title":"Lightweight Coordination Patterns for Applications of the Internet of Things","authors":"Waseem Akhtar Mufti","doi":"10.2478/acss-2020-0013","DOIUrl":"https://doi.org/10.2478/acss-2020-0013","url":null,"abstract":"Abstract Applications of the Internet of Things (IoT) are famously known for connecting devices via the internet. The main purpose of IoT systems (wireless or wired) is to connect devices together for data collection, buffering and data gateway. The collected large size of data is often captured from remote sources for automatic data analytics or for direct decision making by its users. This paper applies the programming pattern for Big Data in IoT systems that makes use of lightweight Java methods, introduced in the recently published work on ClientNet Distributed Cluster. Considering Big Data in IoT systems means the sensing of data from different resources, the network of IoT devices collaborating in data collection and processing; and the gateways servers where the resulting big data is supposed to be directed or further processed. This mainly involves resolving the issues of Big Data, i.e., the size and the network transfer speed along with many other issues of coordination and concurrency. The computer network that connects IoT may further include techniques such as Fog and Edge computing that resolve much of the network issues. This paper provides solutions to these problems that occur in wireless and wired systems. The talk is about the ClientNet programming model and its application in IoT systems for orchestration, such as coordination, data communication, device identification and synchronization between the gateway servers and devices. These devices include sensors attached with appliances (e.g., home automations, supply chain systems, light and heavy machines, vehicles, power grids etc.) or buildings, bridges and computers running data processing applications. 
As described in earlier papers, the introduced ClientNet techniques prevent from big data transfers and streaming that occupy more resources (hardware and bandwidth) and time. The idea is motivated by Big Data problems that make it difficult to collect it from different resources through small devices and then redirecting it. The proposed programming model of ClientNet Distributed Cluster stores Big Data on the nearest server coordinated by the nearest coordinator. The gateways and the systems that run analytics programs communicate by running programs from other computers when it is essentially required. This makes it possible to let Big Data rarely move across a communication network and allow only the source code to move around the network. The given programming model greatly simplifies data communication overheads, communication patterns among devices, networks and servers.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"1 1","pages":"117 - 123"},"PeriodicalIF":1.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90792687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
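The core idea — moving source code to the data rather than streaming big data across the network — can be illustrated with a small pure-Python sketch. The `DataNode` class and `analyze` function below are our illustrative stand-ins, not the ClientNet API:

```python
# Simulate "code moves to data": the analytics code (a source string)
# travels to the node that holds the data, instead of the big dataset
# moving out to the analytics host.
analytics_src = """
def analyze(data):
    return sum(data) / len(data)
"""

class DataNode:
    """Holds a (potentially big) dataset; accepts code, returns small results."""
    def __init__(self, data):
        self._data = data

    def run(self, src):
        scope = {}
        exec(src, scope)            # "deploy" the shipped analytics code
        return scope["analyze"](self._data)

node = DataNode(list(range(1_000_000)))  # the "big" data never leaves this node
result = node.run(analytics_src)         # only source code and a scalar move
```

Only a few hundred bytes of source and one scalar cross the (simulated) network boundary, while the million-element dataset stays put.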
Abstract Clustering has become a very important tool for managing data in many areas, such as pattern recognition, machine learning, and information retrieval. Databases are growing day by day, and thus the data must be maintained in such a manner that useful information can easily be extracted and used. In this process, clustering plays an important role, as it forms clusters of the data on the basis of similarity. There are more than a hundred clustering methods and algorithms that can be used for mining data, but not all of these algorithms provide models for their clusters, and thus it becomes difficult to categorise all of them. This paper describes the most commonly used and popular clustering techniques and compares them on the basis of their merits, demerits, and time complexity.
{"title":"A Systematic Comparative Analysis of Clustering Techniques","authors":"Satinder Bal Gupta, R. Yadav, Shiva Gupta","doi":"10.2478/acss-2020-0011","DOIUrl":"https://doi.org/10.2478/acss-2020-0011","url":null,"abstract":"Abstract Clustering has now become a very important tool to manage the data in many areas such as pattern recognition, machine learning, information retrieval etc. The database is increasing day by day and thus it is required to maintain the data in such a manner that useful information can easily be extracted and used accordingly. In this process, clustering plays an important role as it forms clusters of the data on the basis of similarity in data. There are more than hundred clustering methods and algorithms that can be used for mining the data but all these algorithms do not provide models for their clusters and thus it becomes difficult to categorise all of them. This paper describes the most commonly used and popular clustering techniques and also compares them on the basis of their merits, demerits and time complexity.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"5 1","pages":"87 - 104"},"PeriodicalIF":1.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87808461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
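As a concrete instance of the partitional family such surveys cover, here is a naive 1-D k-means sketch in pure Python (toy data of our own; real comparisons would use a library implementation on multidimensional data):

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Naive 1-D k-means: assign each point to its nearest centroid,
    then recompute centroids as cluster means, and repeat."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)        # random initial centroids
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two well-separated groups of 1-D points
points = [1.0, 1.2, 0.8, 9.0, 9.5, 10.1]
centroids, clusters = kmeans_1d(points, k=2)
```

On this toy data the algorithm converges to centroids near 1.0 and 9.53 regardless of the seeded initialisation; its O(n·k·iters) cost is the kind of time-complexity property such comparative surveys tabulate.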
Y. Kuchin, R. Mukhamediev, K. Yakunin, J. Grundspeņķis, A. Symagulov
Abstract Machine learning (ML) methods are nowadays widely used to automate geophysical studies. Some ML algorithms are used to solve lithological classification problems during the uranium mining process. One of the key aspects of using classical ML methods is choosing data features and estimating their influence on the classification. This paper presents a quantitative assessment of the impact of expert opinions on the classification process. In other words, we have prepared the data, identified the experts, and performed a series of experiments with and without supplying the expert identifier to the input of the automatic classifier during training and testing. A feedforward artificial neural network (ANN) has been used as the classifier. The results of the experiments show that the ANN's “knowledge” of which expert interpreted the data improves the quality of the automatic classification in terms of accuracy (by 5 %) and recall (by 20 %). However, because the input parameters of the model may depend on each other, the SHapley Additive exPlanations (SHAP) method has been used to further assess the impact of the expert identifier. SHAP has allowed assessing the degree of parameter influence. It has revealed that the expert ID is at least two times more influential than any other input parameter of the neural network. This circumstance imposes significant restrictions on the application of ANNs to the task of lithological classification at uranium deposits.
{"title":"Assessing the Impact of Expert Labelling of Training Data on the Quality of Automatic Classification of Lithological Groups Using Artificial Neural Networks","authors":"Y. Kuchin, R. Mukhamediev, K. Yakunin, J. Grundspeņķis, A. Symagulov","doi":"10.2478/acss-2020-0016","DOIUrl":"https://doi.org/10.2478/acss-2020-0016","url":null,"abstract":"Abstract Machine learning (ML) methods are nowadays widely used to automate geophysical study. Some of ML algorithms are used to solve lithological classification problems during uranium mining process. One of the key aspects of using classical ML methods is causing data features and estimating their influence on the classification. This paper presents a quantitative assessment of the impact of expert opinions on the classification process. In other words, we have prepared the data, identified the experts and performed a series of experiments with and without taking into account the fact that the expert identifier is supplied to the input of the automatic classifier during training and testing. Feedforward artificial neural network (ANN) has been used as a classifier. The results of the experiments show that the “knowledge” of the ANN of which expert interpreted the data improves the quality of the automatic classification in terms of accuracy (by 5 %) and recall (by 20 %). However, due to the fact that the input parameters of the model may depend on each other, the SHapley Additive exPlanations (SHAP) method has been used to further assess the impact of expert identifier. SHAP has allowed assessing the degree of parameter influence. It has revealed that the expert ID is at least two times more influential than any of the other input parameters of the neural network. 
This circumstance imposes significant restrictions on the application of ANNs to solve the task of lithological classification at the uranium deposits.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"34 1","pages":"145 - 152"},"PeriodicalIF":1.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76097876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
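SHAP itself requires the `shap` library; the underlying idea — measuring how much each input feature drives a model's predictions — can be illustrated with a simpler permutation-importance sketch. The toy data and "model" below are our own, and the "expert ID" column is only an analogy to the paper's setting:

```python
import random

def permutation_importance(predict, X, y, n_features, seed=0):
    """Accuracy drop when one feature column is shuffled.

    A lightweight stand-in for SHAP: both attribute a model's output
    to its input features, but SHAP averages over feature coalitions
    while this sketch permutes one column at a time.
    """
    rng = random.Random(seed)
    base = sum(predict(row) == label for row, label in zip(X, y)) / len(y)
    drops = []
    for f in range(n_features):
        col = [row[f] for row in X]
        rng.shuffle(col)                      # destroy feature f's signal
        Xp = [row[:f] + [v] + row[f + 1:] for row, v in zip(X, col)]
        acc = sum(predict(row) == label for row, label in zip(Xp, y)) / len(y)
        drops.append(base - acc)
    return drops

# Toy data: the label depends only on feature 0 (think: "expert ID");
# feature 1 is pure noise.
X = [[i % 2, random.Random(i).random()] for i in range(200)]
y = [row[0] for row in X]
predict = lambda row: row[0]      # a toy "model" that uses feature 0 only
drops = permutation_importance(predict, X, y, n_features=2)
```

Shuffling the influential column causes a large accuracy drop, while shuffling the noise column causes none — the same kind of asymmetry the paper reports for the expert ID.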
Abstract A software development method that has no faults or gaps in project implementation has not been elaborated so far. For this reason, the authors have decided to perform this study to make it easier for companies that use one of the agile development methods to better foresee potential risks and to deal with their consequences. The aim of the research is to identify and classify risks in agile software development methods and related projects based on the obtained survey data. To achieve this goal, the authors have developed evaluation criteria and conducted a practical questionnaire survey in various software development companies. From the obtained survey data, the risks are classified according to various factors, i.e., the changing highest and lowest priorities and needs in various projects. Thus, the obtained research results can be applied in various areas of project development, with the order of priority factors changed accordingly.
{"title":"Survey on Risk Classification in Agile Software Development Projects in Latvia","authors":"O. Ņikiforova, Kristaps Babris, Jānis Kristapsons","doi":"10.2478/acss-2020-0012","DOIUrl":"https://doi.org/10.2478/acss-2020-0012","url":null,"abstract":"Abstract Software development method, which does not have any faults or gaps in project implementation, has not been elaborated so far. Due to this reason, the authors have decided to perform this study to make it easier for the companies, which use one of the agile development methods, to better foresee potential risks and to deal with their consequences. The aim of the research is to identify and classify risks in agile software development methods and the related projects based on the obtained survey data. To achieve the goal, the authors have developed evaluation criteria, as well as implemented practical questionnaire in various software development companies. From the obtained survey data, the risks are classified according to various factors, i.e., the changing highest and lowest priorities and needs in various projects. Thus, the obtained research results can be applied in various areas of project development, changing the order of priority factors.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"18 1","pages":"105 - 116"},"PeriodicalIF":1.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78941039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Well logging, also known as a geophysical survey, is one of the main components of the nuclear fuel cycle. The survey follows directly after the drilling process, and the operational quality assessment of its results is a very serious problem: any mistake in the survey can lead to the culling of the whole well. This paper examines the feasibility of applying machine learning techniques to quickly assess the quality of well logging results. The studies were carried out by modelling a reference well for a selected uranium deposit of the Republic of Kazakhstan and comparing it with the results of geophysical surveys recorded earlier. The parameters of the geophysical methods and the comparison rules for them were formulated after the reference well modelling process. Classification trees and artificial neural networks were used during the research, and the results obtained for the two methods were compared with each other. The results of this paper may be useful to enterprises engaged in geophysical well surveys and in processing the data obtained during logging.
{"title":"Suitability Determination of Machine Learning Techniques for the Operational Quality Assessment of Geophysical Survey Results","authors":"Kirill Abramov, J. Grundspeņķis","doi":"10.2478/acss-2020-0017","DOIUrl":"https://doi.org/10.2478/acss-2020-0017","url":null,"abstract":"Abstract Well logging, also known as a geophysical survey, is one of the main components of a nuclear fuel cycle. This survey follows directly after the drilling process, and the operational quality assessment of its results is a very serious problem. Any mistake in this survey can lead to the culling of the whole well. This paper examines the feasibility of applying machine learning techniques to quickly assess the well logging quality results. The studies were carried out by a reference well modelling for the selected uranium deposit of the Republic of Kazakhstan and further comparing it with the results of geophysical surveys recorded earlier. The parameters of the geophysical methods and the comparison rules for them were formulated after the reference well modelling process. The classification trees and the artificial neural networks were used during the research process and the results obtained for both methods were compared with each other. The results of this paper may be useful to the enterprises engaged in the geophysical well surveys and data processing obtained during the logging process.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"187 1","pages":"153 - 162"},"PeriodicalIF":1.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74936613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Nowadays, the interoperability of learning management systems is still not very high. Authoring tools can help transfer e-learning content between different learning management systems. However, to do so, they must be able to produce learning content that is compliant with industry standards. One of the most widely used standards is the SCORM 1.2 release. The research addresses the extension of the functionality of the previously developed content development tool EMMA by incorporating support for a subset of the SCORM 1.2 requirements. The paper describes the process of acquiring, implementing, and validating the defined requirements. Moreover, it presents the results of an analysis of 33 SCORM authoring tools and 16 SCORM players.
{"title":"Definition and Validation of the Subset of SCORM Requirements for the Enhanced Reusability of Learning Content in Learning Management Systems","authors":"S. Petrovica, Alla Anohina-Naumeca, Andris Kikans","doi":"10.2478/acss-2020-0015","DOIUrl":"https://doi.org/10.2478/acss-2020-0015","url":null,"abstract":"Abstract Nowadays, interoperability of learning management systems is still not very high. The authoring tools can help transfer e-learning content between different learning management systems. However, in this context, they should be able to produce learning content that is compliant with some industry standards. One of the most widely used standards is the SCORM 1.2 release. The research addresses the extension of the functionality of the previously developed content development tool EMMA by incorporating into it the support for the subset of SCORM 1.2 requirements. The paper describes the process of the acquisition, implementation, and validation of the defined requirements. Moreover, it presents the results of the analysis of 33 SCORM authoring tools and 16 SCORM players.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"117 1","pages":"134 - 144"},"PeriodicalIF":1.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86809176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
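For orientation, the backbone of a SCORM 1.2 package is the `imsmanifest.xml` file at the package root, which declares the content organization and its resources. A minimal example (course title, identifiers, and file names are illustrative, not from the paper):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<manifest identifier="com.example.course" version="1.2"
          xmlns="http://www.imsproject.org/xsd/imscp_rootv1p1p2"
          xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_rootv1p2">
  <metadata>
    <schema>ADL SCORM</schema>
    <schemaversion>1.2</schemaversion>
  </metadata>
  <organizations default="ORG-1">
    <organization identifier="ORG-1">
      <title>Sample Course</title>
      <item identifier="ITEM-1" identifierref="RES-1">
        <title>Lesson 1</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <resource identifier="RES-1" type="webcontent"
              adlcp:scormtype="sco" href="lesson1/index.html">
      <file href="lesson1/index.html"/>
    </resource>
  </resources>
</manifest>
```

Producing a well-formed manifest of this shape is the kind of requirement an authoring tool must satisfy for its exported content to be importable into SCORM-conformant learning management systems.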
Abstract Due to the rapid increase in the demand for information that supports tourists before, during, and after a trip, many tour systems are available. However, these systems cannot successfully replace a human facilitator, who is expensive to hire. The primary qualities of a human tourist guide are his/her knowledge, communication skills, and interpretation of destination attractions; traditional tourist facilitator systems lack these qualities. The main idea of the research is to design an agent that guides tourists and provides them accurate information about visitable places without being bound to a specific region. The agent has human-like communication skills along with point-of-interest knowledge, which draws on its internal knowledge base as well as its online searching techniques.
{"title":"Design and Development of AI-Based Tourist Facilitator and Information Agent","authors":"Adeel Munawar, S. Raza, Awais Qasim","doi":"10.2478/acss-2020-0014","DOIUrl":"https://doi.org/10.2478/acss-2020-0014","url":null,"abstract":"Abstract Due to the rapid increase in the demand for information that supports tourists after, before, and during the trip, many tour systems are available. However, these systems are not able to successfully replace a human facilitator that is expensive to hire. The primary key qualities of a human tourist guide are his/her knowledge, communication skills, and interpretation of destination attractions. Traditional tourist facilitator systems are lacking in these qualities. The main idea of the research is to design an agent to guide tourists, provide them accurate information about visitable places, without having any bound for a specific region and it will have human-like communication skills along with the point of interest knowledge, which depends on its internal knowledge base as well as its online searching techniques.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"31 1","pages":"124 - 133"},"PeriodicalIF":1.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82205624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. Dutta, J. K. Mandal, Tai Hoon Kim, S. Bandyopadhyay
Abstract Breast cancer diagnosis is one of the most studied problems in the medical domain. Cancer diagnosis has been studied extensively, which underscores the need for early prediction of the disease. To obtain an advance prediction, health records are exploited and given as input to an automated system. The paper focuses on constructing an automated system employing deep learning based recurrent neural network models. A stacked GRU-LSTM-BRNN is proposed that accepts the health records of a patient and determines the possibility of the patient being affected by breast cancer. The proposed model is compared against baseline classifiers: a stacked simple-RNN model, a stacked LSTM-RNN model, and a stacked GRU-RNN model. Comparative results obtained in this study indicate that the stacked GRU-LSTM-BRNN model yields better classification performance for predictions related to breast cancer.
{"title":"Breast Cancer Prediction Using Stacked GRU-LSTM-BRNN","authors":"S. Dutta, J. K. Mandal, Tai Hoon Kim, S. Bandyopadhyay","doi":"10.2478/acss-2020-0018","DOIUrl":"https://doi.org/10.2478/acss-2020-0018","url":null,"abstract":"Abstract Breast Cancer diagnosis is one of the most studied problems in the medical domain. Cancer diagnosis has been studied extensively, which instantiates the need for early prediction of cancer disease. To obtain advance prediction, health records are exploited and given as input to an automated system. The paper focuses on constructing an automated system by employing deep learning based recurrent neural network models. A stacked GRU-LSTM-BRNN is proposed in this paper that accepts health records of a patient for determining the possibility of being affected by breast cancer. The proposed model is compared against other baseline classifiers such as stacked simple-RNN model, stacked LSTM-RNN model, stacked GRU-RNN model. Comparative results obtained in this study indicate that the stacked GRU-LSTM-BRNN model yields better classification performance for predictions related to breast cancer disease.","PeriodicalId":41960,"journal":{"name":"Applied Computer Systems","volume":"53 1","pages":"163 - 171"},"PeriodicalIF":1.0,"publicationDate":"2020-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87737370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
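A stacked GRU-LSTM-BRNN needs a deep learning framework to implement in earnest; for intuition only, one forward step of a scalar GRU cell can be written out directly. The weights below are illustrative, not trained:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h_prev, W):
    """One forward step of a scalar GRU cell (all weights are scalars).

    z = sigmoid(Wz*x + Uz*h + bz)   # update gate
    r = sigmoid(Wr*x + Ur*h + br)   # reset gate
    n = tanh(Wn*x + r*Un*h + bn)    # candidate state
    h = (1 - z)*n + z*h_prev        # new hidden state
    """
    z = sigmoid(W["Wz"] * x + W["Uz"] * h_prev + W["bz"])
    r = sigmoid(W["Wr"] * x + W["Ur"] * h_prev + W["br"])
    n = math.tanh(W["Wn"] * x + r * W["Un"] * h_prev + W["bn"])
    return (1.0 - z) * n + z * h_prev

# Hypothetical weights; a trained model would learn these.
W = dict(Wz=0.5, Uz=-0.3, bz=0.0, Wr=0.8, Ur=0.2, br=0.1,
         Wn=1.2, Un=0.7, bn=0.0)
h = 0.0
for x in [0.2, -0.5, 0.9]:   # a short sequence of scalar features
    h = gru_step(x, h, W)
```

The update gate z interpolates between keeping the previous state and adopting the candidate, which is what lets GRU (and LSTM) layers carry information across long record sequences; a bidirectional stack runs such cells over the sequence in both directions.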