Achieving a high level of data quality is considered one of the most important assets for organizations of any size. Data quality is a central concern for both practitioners and researchers who deal with traditional or big data. The level of data quality is measured through several quality dimensions. A high percentage of current studies focuses on assessing and applying data quality to traditional data. In the era of big data, attention should be paid to the tremendous volume of generated and processed data, of which roughly 80% is unstructured. However, initiatives for creating big data quality evaluation models are still under development. This paper investigates the data quality dimensions most commonly used for both traditional and big data, in order to identify the metrics and techniques used to measure and handle each dimension. Complete definitions of the traditional and big data quality dimensions, their metrics, and their handling techniques are presented. Many data quality dimensions can be applied to both traditional and big data, while a small number apply only to traditional data or only to big data. Current works present few data quality metrics and hardly any handling techniques.
{"title":"DATA QUALITY DIMENSIONS, METRICS, AND IMPROVEMENT TECHNIQUES","authors":"Menna Ibrahim Gabr, Y. Helmy, Doaa S. Elzanfaly","doi":"10.54623/fue.fcij.6.1.3","DOIUrl":"https://doi.org/10.54623/fue.fcij.6.1.3","url":null,"abstract":"Achieving high level of data quality is considered one of the most important assets for any small, medium and large size organizations. Data quality is the main hype for both practitioners and researchers who deal with traditional or big data. The level of data quality is measured through several quality dimensions. High percentage of the current studies focus on assessing and applying data quality on traditional data. As we are in the era of big data, the attention should be paid to the tremendous volume of generated and processed data in which 80% of all the generated data is unstructured. However, the initiatives for creating big data quality evaluation models are still under development. This paper investigates the data quality dimensions that are mostly used in both traditional and big data to figure out the metrics and techniques that are used to measure and handle each dimension. A complete definition for each traditional and big data quality dimension, metrics and handling techniques are presented in this paper. Many data quality dimensions can be applied to both traditional and big data, while few number of quality dimensions are either applied to traditional data or big data. Few number of data quality metrics and barely handling techniques are presented in the current works.","PeriodicalId":100561,"journal":{"name":"Future Computing and Informatics Journal","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84121433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Forecasting future values of time-series data is a critical task in many disciplines, including financial planning and decision-making. Researchers and practitioners in statistics have long applied traditional statistical methods (such as ARMA, ARIMA, ES, and GARCH) with varying accuracies. Deep learning provides more sophisticated, non-linear approximations that supersede traditional statistical methods in most cases. Deep learning methods require minimal feature engineering compared to other methods, as they adopt an end-to-end learning methodology; in addition, they can handle huge amounts of data and variables. Financial time series forecasting poses a challenge due to its highly volatile and non-stationary nature. This work presents a hybrid deep learning model based on recurrent neural network and autoencoder techniques to forecast global commodity prices. Results show better accuracy compared to traditional regression methods for short-term forecast horizons (1, 2, 3, and 7 days).
{"title":"A DEEP LEARNING APPROACH FOR FORECASTING GLOBAL COMMODITIES PRICES","authors":"A. S. Elberawi, M. Belal","doi":"10.54623/fue.fcij.6.1.4","DOIUrl":"https://doi.org/10.54623/fue.fcij.6.1.4","url":null,"abstract":"Forecasting future values of time-series data is a critical task in many disciplines including financial planning and decision-making. Researchers and practitioners in statistics apply traditional statistical methods (such as ARMA, ARIMA, ES, and GARCH) for a long time with varying. accuracies. Deep learning provides more sophisticated and non-linear approximation that supersede traditional statistical methods in most cases. Deep learning methods require minimal features engineering compared to other methods; it adopts an end-to-end learning methodology. In addition, it can handle a huge amount of data and variables. Financial time series forecasting poses a challenge due to its high volatility and non-stationarity nature. This work presents a hybrid deep learning model based on recurrent neural network and Autoencoders techniques to forecast commodity materials' global prices. Results showbetter accuracy compared to traditional regression methods for short-term forecast horizons (1,2,3 and 7days).","PeriodicalId":100561,"journal":{"name":"Future Computing and Informatics Journal","volume":"45 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86579592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The telecommunication sector has developed rapidly, generating large amounts of data as a result of the increasing number of subscribers, modern techniques, data-based applications, and services, as well as a better awareness of customer requirements and the quality needed to meet customer satisfaction. This satisfaction drives rivalry between firms to maintain and upgrade the quality of their services. These data can be usefully mined for analysis and used to predict churners. Researchers around the world have conducted important research to understand how data mining (DM) can be used to predict customer churn. This paper supplies a review of nearly 73 recent journal articles published since 2003 to introduce the different DM techniques used in many customer-based churn models. It summarizes the present literature in the field of communications by highlighting the impact of service quality on customer satisfaction and the detection of churners in the telecoms industry, in addition to the sample sizes used, the churn variables used, and the results of the various DM techniques. Finally, the most common techniques for predicting telecommunication churn, such as classification, regression analysis, and clustering, are covered, thus presenting a roadmap for new researchers to build new churn management models.
{"title":"Review of Data Mining Techniques for Detecting Churners in the Telecommunication Industry","authors":"Mahmoud Ewieda, Essam M. Shaaban, M. Roushdy","doi":"10.54623/fue.fcij.6.1.1","DOIUrl":"https://doi.org/10.54623/fue.fcij.6.1.1","url":null,"abstract":"The telecommunication sector has been developed rapidly and with large amounts of data obtained as a result of increasing in the number of subscribers, modern techniques, data-based applications, and services. As well as better awareness of customer requirements and excellent quality that meets their satisfaction. This satisfaction raises rivalry between firms to maintain the quality of their services and upgrade them. These data can be helpfully extracted for analysis and used for predicting churners. Researchers around the world have conducted important research to understand the uses of Data mining (DM) that can be used to predict customers' churn. This paper supplies a review of nearly 73 recent journalistic articles starting in 2003 to introduce the different DM techniques used in many customerbased churning models. It epitomizes the present literature in the field of communications by highlighting the impact of service quality on customer satisfaction, detecting churners in the telecoms industry, in addition to the sample size used, the churn variables used and the results of various DM technologies. Eventually, the most common techniques for predicting telecommunication churning such as classification, regression analysis, and clustering are included, thus presenting a roadmap for new researchers to build new churn management models.","PeriodicalId":100561,"journal":{"name":"Future Computing and Informatics Journal","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85032189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
There is no doubt that this is the age of data and technology, with tremendous development in all fields. Personalized material is a good approach in many of them: it provides material that fits the styles of its readers and supports them across various reading domains. This paper aims to support students in the educational system and to increase the educational value they receive. It builds smart, appropriate materials through the Egyptian Knowledge Bank (EKB), a rich data platform, based on the learner's question. The approach is implemented in the Business Information Systems (BIS) program at the Faculty of Commerce and Business Administration, Helwan University, Egypt.
{"title":"A CONFIGURABLE MINING APPROACH FOR LEARNING SERVICES CUSTOMIZATION","authors":"Aya M. Mostafa, Y. Helmy, A. Idrees","doi":"10.54623/fue.fcij.6.1.2","DOIUrl":"https://doi.org/10.54623/fue.fcij.6.1.2","url":null,"abstract":"There is no doubt that this age is the age of data and technology. Moreover, there is tremendous development in all fields. The personalized material is a good approach in the different fields. It provides a fit material that matches the styles of readers. It supports readers in various reading domains. This research paper aims to support students in the educational system. Additionally, the research paper designs to increase education values for students. Furthermore, the research paper builds the smart appropriate materials through Egyptian Knowledge Banking (EKB) based on the learner question. The Egyptian Knowledge Bank (EKB) is a rich platform for data. The research paper is implemented in the faculty of Commerce and Business Administration, Business Information System program (BIS) at Helwan University, Egypt.","PeriodicalId":100561,"journal":{"name":"Future Computing and Informatics Journal","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88946055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A common symptom of Parkinson's disease (PD) is Freezing of Gait (FoG), which interrupts the forward progression of the patient's feet while walking. FoG episodes are therefore strongly associated with patient falls. This paper proposes a model for detecting and predicting FoG episodes in patients with Parkinson's disease. FoG prediction is treated as a multi-class classification problem with three classes: FoG, pre-FoG, and walking episodes. The feature scheme applied for detection and prediction consists of Convolutional Neural Network (CNN) features extracted from spectrogram time-frequency representations. The dataset was collected from three tri-axial accelerometer sensors worn by PD patients with FoG. The performance of the suggested approach is evaluated across different machine learning classifiers and accelerometer axes.
{"title":"Deep feature learning for FoG episodes prediction In patients with PD","authors":"Hadeer Elziaat, Nashwa El-Bendary, Ramdan Mowad","doi":"10.54623/fue.fcij.5.2.2","DOIUrl":"https://doi.org/10.54623/fue.fcij.5.2.2","url":null,"abstract":"A common symptom of Parkinson's Disease is Freezing of Gait (FoG) that causes an interrupt of the forward progression of the patient’s feet while walking. Therefore, Freezing of Gait episodes is always engaged to the patient's falls. This paper proposes a model for Freezing of Gait episodes detection and prediction in patients with Parkinson's disease. Predicting Freezing of Gait in this paper considers as a multi-class classification problem with 3 classes namely, FoG, pre-FoG, and walking episodes. In this paper, the extracted feature scheme applied for the detection and the prediction of FoG is Convolutional Neural Network (CNN) spectrogram time-frequency features. The dataset is collected from three tri-axial accelerometer sensors for PD patients with FoG. The performance of the suggested approach has been distinguished by different machine learning classifiers and accelerometer axes.","PeriodicalId":100561,"journal":{"name":"Future Computing and Informatics Journal","volume":"179 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80924542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data clustering is the process of grouping data points so that more similar points end up in the same group. It plays a key role in exploratory data mining and is a popular technique used in many fields to analyze statistical data. Quality clusters are the key requirement of a cluster analysis result, and there is a trade-off between the speed of a clustering algorithm and the quality of the clusters it produces; a state-of-the-art clustering algorithm must consider both criteria. Bio-inspired techniques help ensure that the process is not trapped in local minima, which is the main bottleneck of traditional clustering algorithms, and the results they produce are better than those of traditional algorithms. The newly introduced whale optimization-based clustering is one of the promising algorithms from the bio-inspired family. The quality of the clusters it produces is compared with k-means, the Kohonen self-organizing feature map, and grey wolf optimization. Popular quality measures such as the Silhouette index, the Davies-Bouldin index, and the Calinski-Harabasz index are used in the evaluation.
{"title":"Performance Analysis of Whale optimization based Data Clustering","authors":"Ahamed B M Shafeeq","doi":"10.54623/fue.fcij.5.2.4","DOIUrl":"https://doi.org/10.54623/fue.fcij.5.2.4","url":null,"abstract":"Data clustering is the method of gathering of data points so that the more similar points will be in the same group. It is a key role in exploratory data mining and a popular technique used in many fields to analyze statistical data. Quality clusters are the key requirement of the cluster analysis result. There will be tradeoffs between the speed of the clustering algorithm and the quality of clusters it produces. Both the quality and speed criteria must be considered for the state-of-the-art clustering algorithm for applications. The Bio-inspired technique has ensured that the process is not trapped in local minima, which is the main bottleneck of the traditional clustering algorithm. The results produced by the bio-inspired clustering algorithms are better than the traditional clustering algorithms. The newly introduced Whale optimization-based clustering is one of the promising algorithms from the bio-inspired family. The quality of clusters produced by Whale optimization-based clustering is compared with k-means, Kohonen self-organizing feature diagram, Grey wolf optimization. Popular quality measures such as the Silhouette index, Davies-Bouldin index, and Calianski-Harabasz index are used in the evaluation.","PeriodicalId":100561,"journal":{"name":"Future Computing and Informatics Journal","volume":"223 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90438514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Handwritten signature identification and verification has become an active area of research in recent years. Handwritten signature identification systems are used to identify a user among all users enrolled in the system, while handwritten signature verification systems are used to authenticate a user by comparing a specific signature with the signature stored in the system. This paper presents a review of commonly used methods for pre-processing, feature extraction, and classification in signature identification and verification systems, together with a comparison of the identification and verification systems implemented in the literature for both online and offline settings, taking into consideration the datasets used and the results reported for each system.
{"title":"SIGNATURE IDENTIFICATION AND VERIFICATION SYSTEMS: A COMPARATIVE STUDY ON THE ONLINE AND OFFLINE TECHNIQUES","authors":"Nehal Hamdy Al-banhawy, H. Mohsen, N. Ghali","doi":"10.54623/fue.fcij.5.1.3","DOIUrl":"https://doi.org/10.54623/fue.fcij.5.1.3","url":null,"abstract":"Handwritten signature identification and verification has become an active area of research in recent years. Handwritten signature identification systems are used for identifying the user among all users enrolled in the system while handwritten signature verification systems are used for authenticating a user by comparing a specific signature with his signature that is stored in the system. This paper presents a review for commonly used methods for pre-processing, feature extraction and classification techniques in signature identification and verification systems, in addition to a comparison between the systems implemented in the literature for identification techniques and verification techniques in online and offline systems with taking into consideration the datasets used and results for each system","PeriodicalId":100561,"journal":{"name":"Future Computing and Informatics Journal","volume":"468 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89661203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sickle cell disease is a severe hereditary disease caused by an abnormality of the red blood cells. The current therapeutic decision-making process applied to sickle cell disease includes monitoring a patient's symptoms and complications and then adjusting the treatment accordingly. This process is time-consuming, which might result in serious consequences for patients' lives and could lead to irreversible disease complications. Artificial intelligence, specifically machine learning, is a powerful technique that has been used to support medical decisions. This paper reviews recently developed machine learning models designed to interpret medical data regarding sickle cell disease. To propose such an intelligent model, the suggested framework proceeds in the following sequence. First, the data is preprocessed by imputing missing values and balancing the classes. Then, suitable feature selection methods are applied, and different classifiers are trained and tested. Finally, the model with the highest value of a predefined performance metric over all conducted experiments is nominated. The aim of developing such a model is to predict the severity of a patient's case, to determine the clinical complications of the disease, and to suggest the correct dosage of the treatment(s).
{"title":"Recent Advances and Machine Learning Techniques on Sickle Cell Disease","authors":"Noorh H. Alharbi, Rana O. Bameer, Shahad S. Geddan, Hajar M Alharbi","doi":"10.54623/fue.fcij.5.1.4","DOIUrl":"https://doi.org/10.54623/fue.fcij.5.1.4","url":null,"abstract":"Sickle cell disease is a severe hereditary disease caused by an abnormality of the red blood cells. The current therapeutic decision-making process applied to sickle cell disease includes monitoring a patient’s symptoms and complications and then adjusting the treatment accordingly. This process is time-consuming, which might result in serious consequences for patients’ lives and could lead to irreversible disease complications. Artificial intelligence, specifically machine learning, is a powerful technique that has been used to support medical decisions. This paper aims to review the recently developed machine learning models designed to interpret medical data regarding sickle cell disease. To propose an intelligence model, the suggested framework has to be performed in the following sequence. First, the data is preprocessed by imputing missing values and balancing them. Then, suitable feature selection methods are applied, and different classifiers are trained and tested. Finally, the performing model with the highest predefined performance metric over all experiments conducted is nominated. Thus, the aim of developing such a model is to predict the severity of a patient’s case, to determine the clinical complications of the disease, and to suggest the correct dosage of the treatment(s).","PeriodicalId":100561,"journal":{"name":"Future Computing and Informatics Journal","volume":"73 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76948338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: Early event detection, monitoring, and response can significantly decrease the impact of disasters, and the use of social media for detecting events has lately shown promising results. Objectives: To detect and map events by locating tweets and monitoring them on a map. This new approach uses grouped geoparsing followed by scoring each tweet based on three spatial indicators. Method/Approach: Our approach uses a geoparsing technique to match locations mentioned in tweets to the geographic locations of multiple-event tweets in Egypt at the administrative-subdivision level. Additional geographic information is thus acquired from the tweet itself to detect the actual locations the user mentioned. Results: The approach was developed from a large pool of tweets related to various crisis events over one year. Only very specific tweets were plotted on a crisis map to monitor these events, and the tweets were analyzed through predefined geographical displays and message content filters (damage, casualties). Conclusion: A method was implemented to predict the effective start of any crisis event, and an inequality condition is applied to determine the end of the event. Results indicate that our automated filtering of information provides valuable input for operational response and crisis communication.
{"title":"Twitter Analysis based on Damage Detection and Geoparsing for Event Mapping Management","authors":"Yasmeen Ali Ameen, Khaled Bahnasy, Adel Elmahdy","doi":"10.54623/fue.fcij.5.1.1","DOIUrl":"https://doi.org/10.54623/fue.fcij.5.1.1","url":null,"abstract":"Background: Early event detection, monitor, and response can significantly decrease the impact of disasters. Lately, the usage of social media for detecting events has displayed hopeful results. Objectives: for event detection and mapping; the tweets will locate and monitor them on a map. This new approach uses grouped geoparsing then scoring for each tweet based on three spatial indicators. Method/Approach: Our approach uses a geoparsing technique to match a location in tweets to geographic locations of multiple-events tweets in Egypt country, administrative subdivision. Thus, additional geographic information acquired from the tweet itself to detect the actual locations that the user mentioned in the tweet. Results: The approach was developed from a large pool of tweets related to various crisis events over one year. Only all (very specific) tweets that were plotted on a crisis map to monitor these events. The tweets were analyzed through predefined geo-graphical displays, message content filters (damage, casualties). Conclusion: A method was implemented to predict the effective start of any crisis event and an inequity condition is applied to determine the end of the event. Results indicate that our automated filtering of information provides valuable information for operational response and crisis communication","PeriodicalId":100561,"journal":{"name":"Future Computing and Informatics Journal","volume":"52 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80401149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Requirements engineering is a crucial phase of software engineering, and requirements prioritization is an essential stage of requirements engineering, particularly in agile software development. Requirements prioritization aims at eliciting which software requirements need to be covered in a particular release. The key point is which requirement will be selected for the next iteration and which will be delayed to later iterations, in order to minimize risk during development and meet stakeholders' needs. There are many existing techniques for requirements prioritization, but most of them do not cover continuous growth, change of requirements, and requirements dependencies. Prioritization techniques need to be more continuous, scalable, easy to implement, and integrated with the software development life cycle. This paper introduces a supporting tool for a proposed framework to prioritize requirements in agile software development. The framework addresses the challenges facing this prioritization process, such as how to make prioritization continuous and scalable and how to deal with rapid requirement changes and their dependencies. The proposed framework is validated in a real case study using its supporting tool, and the results are promising.
{"title":"A Supporting Tool for Requirements Prioritization Process in Agile Software Development","authors":"Khaled AbdElazim Muhammad, R. Moawad, Essam Elfakharany","doi":"10.54623/fue.fcij.5.1.2","DOIUrl":"https://doi.org/10.54623/fue.fcij.5.1.2","url":null,"abstract":"Requirements engineering is a crucial phase of software engineering, and requirements prioritization is an essential stage of requirements engineering particularly in agile software development. Requirements prioritization goals at eliciting which requirements of software need to be covered in a particular release. The key point is which requirement will be selected in the next iteration and which one will be delayed to other iterations for minimizing risk during development and meeting stakeholders’ needs. There are many existing techniques for requirement prioritization, but most of these techniques do not cover continuous growth, change of requirements, and requirements dependencies. The prioritization techniques need to be more continuous, scalable, implemented easily and integrated with software development life cycle. This paper introduces a supporting tool for a proposed framework to prioritize requirements in agile software development. This framework tries to find solutions for the challenges facing this prioritization process such as how to make this prioritization continuous and scalable and how to deal with rapidly requirement changes and its dependencies. The proposed framework is validated in a real case study using its supporting tool, and the results are promising","PeriodicalId":100561,"journal":{"name":"Future Computing and Informatics Journal","volume":"57 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80655882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}