MOBILE U-NET V3 AND BILSTM: PREDICTING STOCK MARKET PRICES BASED ON DEEP LEARNING APPROACHES
Pub Date: 2023-01-01, DOI: 10.5455/jjcit.71-1682317264
D. Reddy, B. R.
The development of reliable stock market models has enabled investors to make better-informed decisions. Investors may be able to locate companies that offer the highest dividend yields and lower their investment risks by using a trading strategy. The strong correlation among stock prices, however, makes stock market analysis more complicated when batch processing methods are used. Stock market prediction has entered an era of advanced technology with the rise of global digitalization, and the importance of artificial intelligence models has grown greatly as a result of the significant increase in market capitalization. The proposed study is novel in that it builds a strong time-series framework based on Deep Learning (DL) for predicting future stock prices. Deep learning has recently enjoyed considerable success in several domains due to its exceptional capacity for handling data; for instance, it is commonly used in financial disciplines such as trade execution strategies, portfolio optimization, and stock market forecasting. In this research, we propose a structure based on Mobile U-Net V3 and a hybrid of Mobile U-Net V3 with BiLSTM (Mobile U-Net V3-BiLSTM) to forecast the closing prices of Apple, Inc. and S&P 500 stock data. The Root Mean Squared Error (RMSE), Mean Squared Error (MSE), Pearson's Correlation (R), and Normalized Root Mean Squared Error (NRMSE) metrics were used to evaluate the DL stock prediction methods. The Mobile U-Net V3-BiLSTM model outperformed the other techniques for forecasting stock market prices.
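The four metrics named above are standard and straightforward to reproduce. As a rough illustration, a minimal NumPy sketch (with a hypothetical pair of true/predicted closing-price arrays, and assuming NRMSE is normalized by the range of the true series, a detail the abstract does not specify) could look like this:

```python
import numpy as np

def evaluation_metrics(y_true, y_pred):
    """Compute MSE, RMSE, Pearson's R and NRMSE for a price forecast.

    NRMSE is normalized by the range of the true series here; some papers
    normalize by the mean instead, so treat this choice as an assumption.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)

    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    r = np.corrcoef(y_true, y_pred)[0, 1]          # Pearson's correlation
    nrmse = rmse / (y_true.max() - y_true.min())   # range-normalized RMSE
    return {"MSE": mse, "RMSE": rmse, "R": r, "NRMSE": nrmse}

# Hypothetical closing prices (true vs. predicted), for illustration only.
actual = [150.2, 151.0, 149.8, 152.3, 153.1]
forecast = [150.0, 151.4, 150.1, 151.9, 153.5]
print(evaluation_metrics(actual, forecast))
```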
{"title":"MOBILE U-NET V3 AND BILSTM: PREDICTING STOCK MARKET PRICES BASED ON DEEP LEARNING APPROACHES","authors":"D. Reddy, B. R.","doi":"10.5455/jjcit.71-1682317264","DOIUrl":"https://doi.org/10.5455/jjcit.71-1682317264","url":null,"abstract":"The development of reliable stock market models has enabled investors to make better-informed decisions. Investors may be able to locate companies that offer the highest dividend yields and lower their investment risks by using a trading strategy. The degree to which stock prices are significantly correlated, however, makes stock market analysis more complicated when using batch processing methods. The stock market prediction has entered a time of advanced technology with the rise of technological wonders like global digitalization. The significance of artificial intelligence models has greatly increased as a result of the significantly enhance in market capitalization. Because it builds a strong time-series framework based on Deep Learning (DL) for predicting future stock prices, the proposed study is novel. Deep learning has recently enjoyed considerable success in some domains due to its exceptional capacity for handling data. For instance, it is commonly used in financial disciplines such as trade execution strategies, portfolio optimization, and stock market forecasting. In this research, we propose a structure based on Mobile U-Net V3 and a hybrid of a (Mobile U-Net V3-BiLSTM) with BiLSTM to forecast the closing prices of Apple, Inc. and S&P 500 stock data. The Root Mean Squared Error (RMSE), Mean Squared Error (MSE), Pearson's Correlation (R), and Normalization Root Mean Squared Error (NRMSE) metrics were utilized to calculate the outcomes of the DL stock prediction methods. The Mobile U-Net V3-BiLSTM model outperformed other techniques for forecasting stock market prices.","PeriodicalId":36757,"journal":{"name":"Jordanian Journal of Computers and Information Technology","volume":"1 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70821677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trends and challenges of Arabic Chatbots: Literature review
Pub Date: 2023-01-01, DOI: 10.5455/jjcit.71-1685381801
Yassine Saoudi, Mohamed Gammoudi
Building conversational systems is a natural language processing task that has recently attracted increasing attention with the advancements in Large Language Models (LLMs) and Language Models for Dialogue Applications (LaMDA). However, Conversational Artificial Intelligence (AI) research has mainly been carried out in English. Despite the growing popularity of Arabic as one of the most widely used languages on the Internet, only a few studies have concentrated on Arabic conversational dialogue systems thus far. In this study, we conduct a comprehensive qualitative analysis of the key research works in this domain, examining the limitations and strengths of existing approaches. We start with chatbot history and classification. Then, we examine the approaches underlying Arabic chatbots, both rule-based/retrieval-based and deep-learning-based. In particular, we survey the evolution of Generative Conversational AI alongside the evolution of deep-learning techniques. Next, we look at the different metrics used to assess conversational systems. Finally, we outline the language challenges of building Generative Arabic Conversational AI.
{"title":"Trends and challenges of Arabic Chatbots: Literature review","authors":"Yassine Saoudi, Mohamed Gammoudi","doi":"10.5455/jjcit.71-1685381801","DOIUrl":"https://doi.org/10.5455/jjcit.71-1685381801","url":null,"abstract":"A conversational system is a natural language processing task that has recently attracted increasing attention with the advancements in Large Language Models (LLMs) and Language Models for Dialogue Applications (LaMDA). However, Conversational Artificial Intelligence (AI) research has mainly been carried out in English. Despite the growing popularity of Arabic as one of the most widely used languages on the Internet, only a few studies have concentrated on Arabic conversational dialogue systems thus far. In this study, we conduct a comprehensive qualitative analysis of the key research works in this domain, examining the limitations and strengths of existing approaches. We start with chatbot history and classification. Then, we examine approaches that leverage Arabic chatbots Rule-based/Retrieval-based and Deep learning-based. In particular, we survey the evolution of Generative Conversational AI with the evolution of deep-learning techniques. Next, we look at the different metrics used to assess conversational systems. Finally, we outline language Challenges for building Generative Arabic Conversational AI.","PeriodicalId":36757,"journal":{"name":"Jordanian Journal of Computers and Information Technology","volume":"1 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70821772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Enhanced Approach for CP-ABE with Proxy Re-encryption in IoT Paradigm
Pub Date: 2022-01-01, DOI: 10.5455/jjcit.71-1643700224
Nishant Doshi
In the Internet of Things (IoT), encryption is a technique in which plaintext is converted to ciphertext so that it cannot be recovered by an attacker without the secret key. Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is an encryption technique aimed at multicast settings: a user can decrypt a message only if the attribute policy specified in the ciphertext is satisfied by the attributes in the user's secret key. In the literature, authors have refined this idea to enhance the naïve CP-ABE scheme. Recently, in 2021, Wang et al. proposed a CP-ABE scheme with proxy re-encryption and claimed it to be more efficient than its predecessors. However, it produces variable-length ciphertexts whose size grows with the number of attributes. It also imposes computational overhead on the receiver during decryption, which is performed by the IoT devices. Thus, in this paper we propose an improved scheme that provides constant-length ciphertexts with proxy re-encryption, reducing computation and communication time. The proposed scheme is secure under the Decisional Bilinear Diffie-Hellman (DBDH) assumption.
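The policy-satisfaction condition described above is the heart of CP-ABE. The sketch below illustrates only that access-policy check over a hypothetical AND/OR policy tree; in the real scheme this condition is enforced cryptographically through bilinear pairings, not by a boolean test, so treat this purely as a conceptual aid:

```python
# Minimal sketch of the access-policy check that gates CP-ABE decryption.
# The policy is modeled as a nested AND/OR structure over attribute names;
# the actual scheme enforces this with pairing-based cryptography.

def satisfies(policy, attributes):
    """Return True if the attribute set satisfies the policy tree.

    policy is either an attribute name (str) or a tuple
    ("AND" | "OR", [sub-policies...]).
    """
    if isinstance(policy, str):
        return policy in attributes
    gate, children = policy
    results = (satisfies(child, attributes) for child in children)
    return all(results) if gate == "AND" else any(results)

# Hypothetical example policy: (doctor AND cardiology) OR admin.
policy = ("OR", [("AND", ["doctor", "cardiology"]), "admin"])
print(satisfies(policy, {"doctor", "cardiology"}))  # True  -> can decrypt
print(satisfies(policy, {"doctor", "radiology"}))   # False -> cannot decrypt
```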
{"title":"An Enhanced Approach for CP-ABE with Proxy Re-encryption in IoT Paradigm","authors":"Nishant Doshi","doi":"10.5455/jjcit.71-1643700224","DOIUrl":"https://doi.org/10.5455/jjcit.71-1643700224","url":null,"abstract":"In Internet of Things (IoT), encryption is a technique in which plaintext is converted to ciphertext to make it non-recovered by the attacker without secret key. Ciphertext policy attribute based encryption (CP-ABE) is an encryption technique aimed at multicasting feature i.e. user can only decrypt the message if policy of attributes mentioned in ciphertext is satisfied by the user’s secret key attributes. In literature, the authors have improvised the existing technique to enhance the naïve CP-ABE scheme. Recently, in 2021, Wang et al. have proposed the CP-ABE scheme with proxy re-encryption and claimed it to be efficient as to its predecessors. However, it follows the variable length ciphertext in which size of ciphertext is increased with the number of attributes. Also, it leads to computation overhead on the receiver during decryption which will be performed by the IoT devices. Thus, in this paper we have proposed the improved scheme to provide the constant length ciphertext with proxy re-encryption to reduce the computation and communication time. The proposed scheme is secured under Decisional Bilinear Diffie-Hellman (DBDH) problem.","PeriodicalId":36757,"journal":{"name":"Jordanian Journal of Computers and Information Technology","volume":"1 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70820427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ED25519: A New Secure Compatible Elliptic Curve for Mobile Wireless Network Security
Pub Date: 2022-01-01, DOI: 10.5455/jjcit.71-1636268309
Mausam Das, Z. Wang
{"title":"ED25519: A New Secure Compatible Elliptic Curve for Mobile Wireless Network Security","authors":"Mausam Das, Z. Wang","doi":"10.5455/jjcit.71-1636268309","DOIUrl":"https://doi.org/10.5455/jjcit.71-1636268309","url":null,"abstract":"","PeriodicalId":36757,"journal":{"name":"Jordanian Journal of Computers and Information Technology","volume":"1 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70820582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EARLY PREDICTION OF CERVICAL CANCER USING MACHINE LEARNING TECHNIQUES
Pub Date: 2022-01-01, DOI: 10.5455/jjcit.71-1661691447
Mohammad Batah, M. Alzyoud, Raed Alazaidah, Malek Toubat, Haneen Alzoubi, Areej Olaiyat
According to recent studies and statistics, Cervical Cancer (CC) is one of the most common causes of death worldwide, mainly in developing countries. CC has a mortality rate of around 60% in less developed countries, and the percentage can go even higher due to poor screening processes, lack of sensitization, and several other reasons. Therefore, this paper aims to utilize the high capabilities of machine learning techniques for the early prediction of CC. Specifically, three well-known feature selection and ranking methods have been used to identify the most significant features that help in the diagnosis process. Also, eighteen different classifiers belonging to six learning strategies have been trained and extensively evaluated on a primary dataset consisting of five hundred images. Moreover, an investigation is conducted into the problem of imbalanced class distribution, which is common in medical datasets. The results revealed that the LWNB and RandomForest classifiers showed the best performance in general across four different evaluation metrics. Also, the LWNB and Logistic classifiers were the best choices for handling imbalanced class distribution, which is common in medical diagnosis tasks. The final conclusion is that an ensemble model consisting of several classifiers, such as LWNB, RandomForest, and Logistic, is the best solution for this type of problem.
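As a rough companion to the pipeline described above, the following sketch ranks features, trains a random-forest classifier, and compensates for class imbalance with balanced class weights. It uses scikit-learn on synthetic data purely for illustration; the paper's own learners (e.g. LWNB) and its five-hundred-image dataset are not reproduced here:

```python
# Sketch of the general pipeline: rank features, train a random forest, and
# account for class imbalance. Synthetic data stands in for the clinical set.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic, imbalanced stand-in for the 500-sample dataset.
X, y = make_classification(n_samples=500, n_features=30, weights=[0.9, 0.1],
                           random_state=42)

# Feature ranking/selection (one of several methods compared in the paper).
selector = SelectKBest(mutual_info_classif, k=10)
X_sel = selector.fit_transform(X, y)

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, stratify=y, random_state=42)

# class_weight="balanced" is one simple way to handle a skewed class distribution.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=42)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```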
{"title":"EARLY PREDICTION OF CERVICAL CANCER USING MACHINE LEARNING TECHNIQUES","authors":"Mohammad Batah, M. Alzyoud, Raed Alazaidah, Malek Toubat, Haneen Alzoubi, Areej Olaiyat","doi":"10.5455/jjcit.71-1661691447","DOIUrl":"https://doi.org/10.5455/jjcit.71-1661691447","url":null,"abstract":"According to recent studies and statistics, Cervical Cancer (CC) is one of the most common causes of death worldwide, and mainly in the developing countries. CC has a mortality rate around 60%, in less developing countries and the percentages could go even higher, due to poor screening processes, lack of sensitization, and several other reasons. Therefore, this paper aims to utilize the high capabilities of machine learning techniques in the early prediction of CC. In specific, three well-known feature selection and ranking methods have been used to identify the most significant features that help in the diagnosis process. Also, eighteen different classifiers that belong to six learning strategies have been trained and extensively evaluated against a primary data which consists of five hundred images. Moreover, an investigation regarding the problem of imbalance class distribution which is common in medical dataset is conducted. The results revealed that LWNB and RandomForest classifiers showed the best performance in general, and considering four different evaluation metrics. Also, LWNB and Logistic classifiers were the best choices to handle the problem of imbalance class distribution which is common in medical diagnosis task. The final conclusion could be made is that using an ensemble model which consists of several classifiers such as LWNB, RandomForest, and Logistic is the best solution to handle this type of problems.","PeriodicalId":36757,"journal":{"name":"Jordanian Journal of Computers and Information Technology","volume":"1 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70821073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DEVELOPMENT OF ENSEMBLE MACHINE LEARNING MODEL TO IMPROVE COVID-19 OUTBREAK FORECASTING
Pub Date: 2022-01-01, DOI: 10.5455/jjcit.71-1640174252
Meaad Alrehaili, F. Assiri, Kouther Omari
The world is currently facing the coronavirus disease 2019 (COVID-19) pandemic. Forecasting the progression of that pandemic is integral to planning the necessary next steps by governments and organizations. Recent studies have examined the factors that may impact COVID-19 forecasting and others have built models for predicting the numbers of active cases, recovered cases, and deaths. The aim of this study was to improve the forecasting predictions by developing an ensemble machine-learning model that can be utilized in addition to the Naïve Bayes classifier, which is one of the simplest and fastest probabilistic classifiers. The first ensemble model combined gradient boosting and random forest classifiers and the second combined support vector machine and random forest classifiers. The numbers of confirmed, recovered, and death cases were predicted for a period of 10 days and the results were compared to the findings of previous studies. The results showed that the ensemble algorithm that combined gradient boosting and random forest classifiers achieved the best performance, with 99% accuracy in all cases.
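A minimal sketch of the first ensemble (gradient boosting combined with a random forest) is given below. The lagged-window features, the averaging combiner, and the synthetic case-count series are all assumptions made for illustration, since the abstract does not state the exact setup:

```python
# Sketch: combine gradient boosting and a random forest in an averaging
# ensemble to forecast daily case counts from a lagged window of the series.
import numpy as np
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              VotingRegressor)

rng = np.random.default_rng(0)
daily_cases = np.cumsum(rng.poisson(100, size=120)).astype(float)  # synthetic series

# Turn the series into a supervised problem: predict tomorrow from the last 7 days.
window = 7
X = np.array([daily_cases[i:i + window] for i in range(len(daily_cases) - window)])
y = daily_cases[window:]

ensemble = VotingRegressor([
    ("gb", GradientBoostingRegressor(random_state=0)),
    ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
])
ensemble.fit(X[:-10], y[:-10])

# Forecast the held-out last 10 days, mirroring the 10-day horizon in the paper.
print(ensemble.predict(X[-10:]))
```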
{"title":"DEVELOPMENT OF ENSEMBLE MACHINE LEARNING MODEL TO IMPROVE COVID-19 OUTBREAK FORECASTING","authors":"Meaad Alrehaili, F. Assiri, Kouther Omari","doi":"10.5455/jjcit.71-1640174252","DOIUrl":"https://doi.org/10.5455/jjcit.71-1640174252","url":null,"abstract":"The world is currently facing the coronavirus disease 2019 (COVID-19 pandemic). Forecasting the progression of that pandemic is integral to planning the necessary next steps by governments and organizations. Recent studies have examined the factors that may impact COVID-19 forecasting and others have built models for predicting the numbers of active cases, recovered cases and deaths. The aim of this study was to improve the forecasting predictions by developing an ensemble machine-learning model that can be utilized in addition to the Naïve Bayes classifier, which is one of the simplest and fastest probabilistic classifiers. The first ensemble model combined gradient boosting and random forest classifiers and the second combined support vector machine and random-forest classifiers. The numbers of confirmed, recovered and death cases will be predicted for a period of 10 days. The results will be compared to the findings of previous studies. The results showed that the ensemble algorithm that combined gradient boosting and random-forest classifiers achieved the best performance, with 99% accuracy in all cases.","PeriodicalId":36757,"journal":{"name":"Jordanian Journal of Computers and Information Technology","volume":"1 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70820757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data Hiding Technique for Color Images using Pixel Value Differencing and Chaotic Map
Pub Date: 2022-01-01, DOI: 10.5455/jjcit.71-1642508824
N. Yassin
The huge advances in information technology and communication have resulted in extensive use of digital networks, making information security more important than ever. Steganography is the art of hiding secret message bits in different multimedia data, using either the spatial domain or the frequency domain, to protect the transferred information against unauthorized access. Most techniques that apply the Pixel Value Differencing (PVD) approach embed in a sequential manner, which lacks security. In the proposed method, a complex chaotic map is used to randomly choose the coefficient pairs for embedding the secret message. First, the cover image is transformed using the Integer Wavelet Transform (IWT). Then, the embedding process starts in the highest frequency band of the IWT and continues to the next sub-bands. Adaptive embedding is performed according to the intensity variation between pixel pairs using PVD and Least Significant Bit (LSB) substitution. The non-sequential embedding performed using the chaotic map makes the method more secure. The experimental results show that the proposed technique achieves a high PSNR with improved capacity compared to other techniques.
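The key security idea, deriving a non-sequential, key-dependent embedding order from a chaotic map, can be sketched as follows. The logistic map, the parameter values, and the function name are illustrative choices; the paper uses a more complex chaotic map combined with IWT, PVD, and LSB embedding, which are not reproduced here:

```python
# Sketch of the non-sequential embedding idea: a chaotic map, seeded by a
# secret key, decides the order in which coefficient pairs are visited.
def chaotic_embedding_order(n_pairs, x0=0.7131, r=3.99):
    """Return a permutation of pair indices driven by a logistic map.

    x0 (the initial condition) acts as the shared secret key; r is the
    logistic-map parameter, chosen in the chaotic regime.
    """
    x = x0
    values = []
    for _ in range(n_pairs):
        x = r * x * (1.0 - x)
        values.append(x)
    # Sort indices by the chaotic values: same key -> same order at the receiver.
    return sorted(range(n_pairs), key=lambda i: values[i])

order = chaotic_embedding_order(10)
print(order)   # a scrambled visiting order, e.g. [3, 7, 0, ...]
```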
{"title":"Data Hiding Technique for Color Images using Pixel Value Differencing and Chaotic Map","authors":"N. Yassin","doi":"10.5455/jjcit.71-1642508824","DOIUrl":"https://doi.org/10.5455/jjcit.71-1642508824","url":null,"abstract":"The huge advance in information technology and communication resulted in extreme usage of digital networks that makes information security playing an important role as never before. Steganography is the art of hiding secret message bits into different multimedia data using either spatial domain or frequency domain to provide security for the transferred information against unauthorized access. Most of the techniques that apply Pixel Value Differencing approach (PVD) depend on sequential embedding manner, which lacks security. In the proposed method, a complex chaotic map is used to randomly choose the coefficients pairs for embedding the secret message. First, the cover image is transformed using IWT. Then, the embedding process starts in the highest frequency band of IWT and continues to the next sub-bands. Adaptive embedding is performed according to intensity variation between pixel pairs using PVD and Least Significant Bit substitution (LSB). Non-sequential embedding performed by using chaotic map lets the method more secure. The experimental results show that the proposed technique achieves high PSNR with improved capacity compared to other techniques.","PeriodicalId":36757,"journal":{"name":"Jordanian Journal of Computers and Information Technology","volume":"112 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70820825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Risk factors identification for stroke prognosis using machine learning algorithms
Pub Date: 2022-01-01, DOI: 10.5455/jjcit.71-1652725746
T. Ahammad
Stroke is a life-threatening condition and the second-leading cause of death worldwide. It is a challenging problem in the public health domain of the 21st century for healthcare professionals and researchers, and proper monitoring of stroke can prevent it or reduce its severity. Risk factor analysis is one of the promising approaches for identifying the presence of stroke disease. Numerous studies have focused on forecasting strokes for patients, and the majority achieved good accuracy, around 90%, on the publicly available dataset. Combining several preprocessing tasks can considerably increase the quality of classifiers, an area that needs further research. Additionally, researchers should pinpoint the major risk factors for stroke disease and use advanced classifiers to forecast the likelihood of stroke. This article presents an enhanced approach for identifying the potential risk factors and predicting the incidence of stroke on a publicly available clinical dataset. The method considers and resolves significant gaps in previous studies. It incorporates ten classification models, including advanced boosting classifiers, to detect the presence of stroke. The performance of the classifiers is analyzed on all possible subsets of attribute/feature selections with respect to five metrics to find the best-performing algorithms. The experimental results demonstrate that the proposed approach achieved the best accuracy on all feature classifications. Overall, this study's main achievement is obtaining a higher accuracy of stroke prognosis (97% using boosting classifiers) than state-of-the-art approaches on the stroke dataset. Hence, physicians can use gradient- and ensemble-boosting-tree-based models, which are most suitable for predicting patients' strokes in the real world. Moreover, this investigation also reveals that age, heart disease, glucose level, hypertension, and marital status are the most significant risk factors, while the remaining attributes are also essential to obtaining the best performance.
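The exhaustive feature-subset search described above can be sketched as follows, with a gradient-boosting classifier scored by cross-validated accuracy. The feature names, the synthetic data, and the single metric are stand-ins for illustration; the study evaluates ten classifiers against five metrics on a real clinical dataset:

```python
# Sketch: train a boosting classifier on every subset of a small candidate
# feature list and keep the best-scoring subset.
from itertools import combinations
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

feature_names = ["age", "heart_disease", "glucose", "hypertension", "married"]
X, y = make_classification(n_samples=400, n_features=len(feature_names),
                           n_informative=4, n_redundant=1, random_state=1)

best_score, best_subset = 0.0, None
for k in range(1, len(feature_names) + 1):
    for subset in combinations(range(len(feature_names)), k):
        score = cross_val_score(GradientBoostingClassifier(random_state=1),
                                X[:, subset], y, cv=5).mean()
        if score > best_score:
            best_score, best_subset = score, subset

print("best accuracy:", round(best_score, 3),
      "features:", [feature_names[i] for i in best_subset])
```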
{"title":"Risk factors identification for stroke prognosis using machine learning algorithms","authors":"T. Ahammad","doi":"10.5455/jjcit.71-1652725746","DOIUrl":"https://doi.org/10.5455/jjcit.71-1652725746","url":null,"abstract":"Stroke is a life-threatening condition causing the second-leading number of deaths worldwide. It is a challenging problem in the public health domain of the 21st century for healthcare professionals and researchers. So, proper monitoring of stroke can prevent and reduce its severity. Risk factor analysis is one of the promising approaches for identifying the presence of stroke disease. Numerous researches have focused on forecasting strokes for patients. The majority had a good accuracy ratio, around 90%, on the publicly available dataset. Combining several preprocessing tasks can considerably increase the quality of classifiers, an area of research need. Additionally, the researchers should pinpoint the major risk factors for stroke disease and use advanced classifiers to forecast the likelihood of stroke. This article presents an enhanced approach for identifying the potential risk factors and predicting the incidence of stroke on a publicly available clinical dataset. The method considers and resolves significant gaps in the previous studies. It incorporates ten classification models, including advanced boosting classifiers, to detect the presence of stroke. The performance of the classifiers is analyzed on all possible subsets of attribute/feature selections concerning five metrics to find the best-performing algorithms. The experimental results demonstrate that the proposed approach achieved the best accuracy on all feature classifications. Overall, this study's main achievement is obtaining a higher percentage (97% accuracy using boosting classifiers) of stroke prognosis than state-of-the-art approaches to stroke dataset. Hence, physicians can use gradient and ensemble boosting-tree-based models that are most suitable for predicting patients' strokes in the real world. Moreover, this investigation also reveals that age, heart disease, glucose level, hypertension, and marital status are the most significant risk factors. At the same time, the remaining attributes are also essential to obtaining the best performance.","PeriodicalId":36757,"journal":{"name":"Jordanian Journal of Computers and Information Technology","volume":"1 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70820885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A SEMI-DEFECTED GROUND PLANE AND A BINARY GENETIC ALGORITHM FOR DESIGNING A VERY COMPACT TRIPLE-BAND PIFA ANTENNA
Pub Date: 2022-01-01, DOI: 10.5455/jjcit.71-1652950714
L. Wakrim, Asma Khabba, Jamal Amadid, S. Ibnyaich
In this study, we propose a very compact triple-band PIFA antenna for mobile and wireless applications, designed using a binary genetic algorithm and a semi-defected ground plane. The antenna, with dimensions of 38×40×1.9 mm³, is dedicated to LTE Band 11 (1427.9-1495.9 MHz), HIPERLAN/2 (5.15-5.35 GHz), WLAN (5.15-5.35 GHz), and 5G Sub-6 GHz applications. To accomplish triple-band operation with acceptable performance, the genetic algorithm is used to dictate the form of the ground plane of the antenna. The simulation results showed that the developed PIFA antenna operates optimally at three frequencies. The first resonance frequency is 1.32 GHz, with a bandwidth (S11 < -10 dB) from 1.28 GHz to 1.38 GHz. The middle and higher bands are centred at 3.12 GHz and 5.2 GHz, respectively, with bandwidths from 3.05 to 3.17 GHz and from 4.93 to 5.44 GHz.
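The role of the binary genetic algorithm, encoding each ground-plane cell as one bit and evolving the pattern, can be sketched as below. The grid size, population settings, and especially the fitness function are placeholders; in practice, fitness would come from a full-wave EM simulation of S11 at the three target bands:

```python
# Structural sketch of a binary GA shaping a ground plane: each bit of a
# chromosome switches one ground-plane cell on or off.
import random

random.seed(0)
CELLS = 16 * 16          # hypothetical 16x16 grid of ground-plane cells
POP, GENERATIONS = 30, 40

def fitness(chromosome):
    # Placeholder score rewarding ~50% metallization. Replace with a call to
    # an EM simulator that scores S11 against the triple-band targets.
    fill = sum(chromosome) / CELLS
    return 1.0 - abs(fill - 0.5)

def crossover(a, b):
    cut = random.randrange(1, CELLS)
    return a[:cut] + b[cut:]

def mutate(c, rate=0.01):
    return [bit ^ 1 if random.random() < rate else bit for bit in c]

population = [[random.randint(0, 1) for _ in range(CELLS)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]                 # simple truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best fitness:", round(fitness(best), 3))
```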
{"title":"A SEMI-DEFECTED GROUND PLANE AND A BINARY GENETIC ALGORITHM FOR DESIGNING A VERY COMPACT TRIPLE-BAND PIFA ANTENNA","authors":"L. Wakrim, Asma Khabba, Jamal Amadid, S. Ibnyaich","doi":"10.5455/jjcit.71-1652950714","DOIUrl":"https://doi.org/10.5455/jjcit.71-1652950714","url":null,"abstract":"we suggest in this study a very compact triple band PIFA antenna for mobile and wireless applications.by using the binary genetic algorithm and a semi-defected ground plane. This antenna with the dimension of 38×40×1.9 mm3 is dedicated to LTE Band 11 (1427.9-1495.9 MHz), HIPERLAN/2 (5.15-5.35 GHz), WLAN (5.15-5.35 GHz) and 5G Sub-6GHz applications. To accomplish triple-band operation with acceptable performance purpose of using the genetic algorithm is to dictate the form of the ground plane of the antenna. The simulation results showed that the developed PIFA antenna has optimal operation on three frequencies. The first resonance frequency is 1.32 GHz with a bandwidth (S11 < -10 dB) from 1.28 GHz to 1.38 GHz. The middle and higher bands are centred respectively at 3.12 GHz and 5.2 GHz, with a bandwidth from 3.05 to 3.17 GHz and 4.93 to 5.44 GHz, respectively.","PeriodicalId":36757,"journal":{"name":"Jordanian Journal of Computers and Information Technology","volume":"1 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70820906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
COMBINATION OF DEEP LEARNING MODELS TO FORECAST STOCK PRICE OF AAPL AND TSLA
Pub Date: 2022-01-01, DOI: 10.5455/jjcit.71-1655723854
Zahra Berradi, M. Lazaar, O. Mahboub, Halim Berradi, Hicham Omara
Deep learning is a promising domain with applications in many different areas of life, and it is widely applied to the stock market due to its efficiency. Long Short-Term Memory (LSTM) has proved its efficiency in dealing with time-series data thanks to its unique hidden-unit structure. This paper integrates LSTM with an attention mechanism and sentiment analysis to forecast the closing prices of two stocks, namely AAPL and TSLA, from the NASDAQ stock market. We compared our hybrid model with LSTM, LSTM with sentiment analysis, and LSTM with an attention mechanism. Three metrics are used to measure the performance of the models: Mean Square Error (MSE), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). The results show that the hybrid model is more accurate than the LSTM model alone.
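A minimal sketch of an LSTM-with-attention forecaster in Keras is shown below. The layer sizes, the 30-day window, and the idea of feeding a per-day sentiment score as a second input feature are assumptions for illustration, not the authors' exact architecture:

```python
# Sketch: LSTM over a price/sentiment window, followed by self-attention over
# the LSTM outputs and a dense head predicting the next closing price.
import numpy as np
from tensorflow.keras import layers, Model

window, features = 30, 2   # e.g. closing price plus a per-day sentiment score

inputs = layers.Input(shape=(window, features))
seq = layers.LSTM(64, return_sequences=True)(inputs)
# Self-attention over the LSTM outputs: each time step attends to all others.
context = layers.Attention()([seq, seq])
pooled = layers.GlobalAveragePooling1D()(context)
output = layers.Dense(1)(pooled)           # next-day closing price

model = Model(inputs, output)
model.compile(optimizer="adam", loss="mse")
model.summary()

# Tiny random batch, only to show the expected input/output shapes.
X = np.random.rand(8, window, features).astype("float32")
y = np.random.rand(8, 1).astype("float32")
model.fit(X, y, epochs=1, verbose=0)
```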
{"title":"COMBINATION OF DEEP LEARNING MODELS TO FORECAST STOCK PRICE OF AAPL AND TSLA","authors":"Zahra Berradi, M. Lazaar, O. Mahboub, Halim Berradi, Hicham Omara","doi":"10.5455/jjcit.71-1655723854","DOIUrl":"https://doi.org/10.5455/jjcit.71-1655723854","url":null,"abstract":"Deep Learning is a promising domain. It has different applications in different areas of life, and its application on the stock market is widely used due to its efficiency. Long Short-Term Memory (LSTM) proved its efficiency in dealing with time series data due to the unique hidden unit structure. This paper integrated LSTM with Attention Mechanism and sentiment analysis to forecast the closing price of two stocks, namely APPL and TSLA, from the NASDAQ stock market. We compared our hybrid model with LSTM, LSTM with sentiment analysis, and LSTM with Attention Mechanism. Three benchmarks are used to measure the performance of the models, the first one is Mean Square Error (MSE), the second one is Root Mean Square Error (RMSE), and the third one is Mean Absolute Error (MAE). The results show that the hybridization is more accurate compared to only LSTM model.","PeriodicalId":36757,"journal":{"name":"Jordanian Journal of Computers and Information Technology","volume":"1 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70821013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}