LongCGDroid: Android malware detection through longitudinal study for machine learning and deep learning
Abdelhak Mesbah, Ibtihel Baddari, Mohamed Raihla
Pub Date: 2023-01-01 | DOI: 10.5455/jjcit.71-1693392249
This study compares the longitudinal performance of machine learning and deep learning classifiers for Android malware detection at different levels of feature abstraction. Using a dataset of 200k Android apps labeled by date over a 10-year range (2013-2022), we propose LongCGDroid, an effective image-based approach to Android malware detection. We use a semantic Call Graph API representation, derived from the Control Flow Graph and Data Flow Graph, to extract abstracted API calls, and we evaluate the longitudinal performance of LongCGDroid against API changes. Several models are used: machine learning models (LR, RF, KNN, SVM) and deep learning models (CNN, RNN). Empirical experiments demonstrate a progressive decline in performance for all classifiers when they are evaluated on samples from later periods, whereas the CNN model under class abstraction maintains relative stability over time. In comparison with eight state-of-the-art approaches, LongCGDroid achieves higher accuracy.
Prediction of People Sentiments on Twitter using Machine Learning Classifiers During Russian Aggression in Ukraine
Mohammed Baker, Kamal H. Jihad, Y. Taher
Pub Date: 2023-01-01 | DOI: 10.5455/jjcit.71-1676205770
Social media has become an excellent way to discover people's thoughts about various topics and situations. In recent years, many studies have focused on social media during crises, including natural disasters and man-made crises such as wars. This study examines how people expressed their feelings on Twitter during the Russian aggression against Ukraine. The study pursued two goals: first, to find the most relevant hashtags about the aggression in order to collect a unique dataset; second, to use several well-known Machine Learning (ML) models to classify the tweets by the sentiment they express. The experimental results show that most of the evaluated ML classifiers achieve higher accuracy on a balanced dataset. However, the experiments with data-balancing strategies do not indicate that every class necessarily performs better. It is therefore important to compare and contrast the data-balancing strategies employed in Sentiment Analysis (SA) and ML studies, using more classifiers and a wider range of use cases.
A Blended Soft Computing Model for Stock Value Prediction
Usha Nsssn, D. R.
Pub Date: 2023-01-01 | DOI: 10.5455/jjcit.71-1683995072
Stock investments play a crucial role in a country's economic growth. Investors can optimize profit and avoid risk through accurate stock value prediction models, which motivates researchers to work on various aspects of correlated features and predictive models for stock value prediction. Existing stock value prediction models have used data such as Twitter posts, microblogs, price history, and Google Trends. Domain-specific, dictionary-based deep learning has emerged as a competitive alternative for stock value prediction, but the accuracy of these models depends on the quality of the input, the correlation among the features, and the correctness of the sentiment scores generated for the dictionary terms. Financial-news sentiment analysis for stock value prediction with dictionary-based learning therefore needs attention to improving input quality and the generation of dictionary-term sentiment scores. The present research develops a Blended Soft Computing Model for stock value prediction (BSCM) with cooperative fusion and dictionary-based deep learning. Six Indian stocks covering uptrend, sideways, and downtrend behavior are considered, with stock price histories and news headlines from 8th August 2016 to 31st March 2023, i.e., 2,427 days; the price-history dataset contains 14,562 records and the news-headlines dataset 46,213. Prediction performance is improved by taking advantage of multi-source information and context-aware learning. The research pursues three objectives: 1) apply cooperative fusion to combine news headlines and stock price histories collected from multiple sources, improving input quality with correlated features; 2) build a dictionary, FNSentiment, with a novel strategy; and 3) predict stock values by integrating FNSentiment with a News Sentiment Prediction Model (NSPM). In the experiments, the proposed model outperformed state-of-the-art models with an accuracy of 91.11%, an RMSE of 10.35, a MAPE of 0.02, and an MAE of 2.74.
Agent-Based Approach for Task Offloading in Edge Computing
H. Morshedlou, Reza Shoar
Pub Date: 2023-01-01 | DOI: 10.5455/jjcit.71-1673098290
Due to the limited resource capacity of the edge network and the high volume of tasks offloaded to edge servers, edge resources may be unable to provide the capacity required to serve all tasks. As a result, some tasks must be moved to the cloud, which may introduce additional delays and dissatisfy the users of the transferred tasks. In this paper, a new agent-based decision-making approach is presented for determining which tasks should be transferred to the cloud and which should be served locally. The approach pairs tasks with resources such that each paired resource is the one most preferred by the user or task among all available resources. We demonstrate that reaching a Nash equilibrium satisfies this condition, and a game-theoretic analysis shows that the presented approach increases users' average utility and level of satisfaction.
A Three-Band Patch Antenna Using a Defected Ground Structure Optimized by a Genetic Algorithm for the Modern Wireless Mobile Applications
Khadija Abouhssous, L. Wakrim, A. Zugari, A. Zakriti
Pub Date: 2023-01-01 | DOI: 10.5455/jjcit.71-1667052517
This paper presents a design and optimization approach for a miniature tri-band planar rectangular patch antenna for wireless mobile applications. Tri-band operation with a compact size is achieved by introducing a defected ground structure (DGS) that controls the surface current distribution on the patch antenna and consequently yields multi-band operation. The geometry of the patch and the position of the DGS were optimized by a genetic algorithm to reach the desired performance with a simple, miniature design of 16 mm × 20 mm × 1.6 mm, an 82% reduction in area relative to the conventional single-band structure used in the optimization process. The proposed GA-optimized antenna provides tri-band operation with |S11| > 6 dB bandwidths of 3.2-3.5 GHz, 5.5-5.9 GHz, and 6.3-7.1 GHz. At the center frequencies of 3.4, 5.7, and 6.7 GHz, the peak gains are 0.7, 1.76, and 2.93 dB, respectively. The optimized antenna is etched on an FR-4 substrate. Simulation and measurement results show good agreement, making the proposed structure a suitable candidate for mobile applications requiring small, multifunctional telecommunication devices.
Orthogonal Regressed Steepest Descent Deep Perceptive Neural Learning for IoT-Aware Secured Big Data Communication
S. V., Swapna L
Pub Date: 2023-01-01 | DOI: 10.5455/jjcit.71-1669807150
The Internet of Things (IoT) is a collection of interconnected intelligent devices that exist within the larger network known as the Internet. With the increasing popularity of IoT devices, massive amounts of data are generated every day, and the collected data must be continuously uploaded to the cloud server. Data transmission to the cloud environment takes place over the Internet and therefore faces numerous threats, yet big-data communication still lacks effective security. Therefore, a novel technique called Orthogonal Regressed Steepest Descent Deep Structured Perceptive Neural Learning based Secured Data Communication (ORSDDSPNL-SDC) is introduced, with higher accuracy and lower time consumption. The ORSDDSPNL-SDC technique comprises three phases: registration, user authentication, and secure data communication. In the registration phase, a new ID and password are created for each user in the cloud, and the IoT device's data is sent by the cloud user to a cloud server for storage. After that, orthogonal regressed steepest descent multilayer deep perceptive neural learning is applied to compare the presented user ID with the already registered IDs based on the Szymkiewicz-Simpson coefficient, and the Maxout activation function classifies the user as authorized or unauthorized. Finally, the steepest descent function is applied to minimize the classification error and increase the classification accuracy. In this way, authorized and unauthorized users are identified, and secure communication is performed with the authorized cloud users. Experimental evaluation is carried out in terms of classification accuracy, classification time, error rate, and space complexity with respect to the number of users. The results and discussion indicate that the proposed ORSDDSPNL-SDC achieves higher classification accuracy with lower error and computation time compared to existing methods.
Interpreting the Relevance of Readability Prediction Features
Safae Berrichi, Naoual Nassiri, A. Mazroui, A. Lakhouaja
Pub Date: 2023-01-01 | DOI: 10.5455/jjcit.71-1667559201
Text readability is a research area that has been widely developed for several languages but remains very limited for Arabic. The main challenge in this area is to identify an optimal set of features that represent texts and allow their readability level to be evaluated. To address this challenge, we propose in this study various feature selection methods that can retrieve a significantly discriminating set of features representing Arabic texts. The second aim of this paper is to evaluate different sentence embedding approaches (ArabicBert, AraBert, and XLM-R) and compare their performance to that obtained using the selected linguistic features. We performed experiments with both SVM and Random Forest classifiers on two corpora dedicated to learning Arabic as a foreign language (L2). The results show that reducing the number of features improves the performance of the readability prediction models by more than 25% and 16% on the two corpora, respectively. In addition, the fine-tuned Arabic-BERT model performs better than the other sentence embedding methods but provides less improvement than the feature-based models. Combining these methods with the most discriminating features produced the best performance.
Effectiveness of zero-shot models in automatic Arabic Poem generation
M. Beheitt, M. Hajhmida
Pub Date: 2023-01-01 | DOI: 10.5455/jjcit.71-1666660323
Text generation is one of the most challenging applications of artificial intelligence and natural language processing. In recent years, text generation has received much attention thanks to advances in deep learning and language modeling. However, writing poetry is a challenging activity even for humans, requiring creativity and a high level of linguistic ability, so automatic poem generation is an important research issue that has piqued the interest of the Natural Language Processing (NLP) community. Several researchers have examined automatic poem generation using deep learning approaches, but little work has focused on Arabic poetry. In this work, we show how various GPT-2 and GPT-3 models can be used to automatically generate Arabic poems. BLEU scores and human evaluation are used to assess the output of four GPT-based models. Both indicate that fine-tuned GPT-2 outperforms GPT-3 and fine-tuned GPT-3, with the GPT-3 model scoring lowest in terms of poeticness. To the best of the authors' knowledge, this is the first work in the literature that employs and fine-tunes GPT-3 to generate Arabic poems.
Enhancing Media Streaming in Wireless Networks using IFW-CFH Algorithm
Satheesh Nj, A. Ch
Pub Date: 2023-01-01 | DOI: 10.5455/jjcit.71-1678514473
Quality of Experience (QoE) is a major concern for service providers and application developers, since high traffic congestion on the Internet leads to degraded video quality. The effectiveness of video transmission is reduced by packet loss, limited bandwidth, and delay in the network: bandwidth limitations force videos to be delivered at low quality, while packet loss causes reduced throughput, re-buffering, or mosaic artifacts, depending on whether the stream is delivered in reliable or unreliable mode. This paper therefore proposes an Improved Fuzzy Weighted queueing based Crossover Fire Hawk (IFW-CFH) algorithm for effective real-time video transmission. The objective of the IFW-CFH approach is to reduce delay, packet loss, and bandwidth consumption and thus enhance video quality through two key mechanisms: a congestion control mechanism and a packet scheduling mechanism. As encoded video frames are generated, the packaged packets in the local buffer are transmitted by a scheduler based on the proposed IFW-CFH algorithm. Experiments show that, compared to existing methods, the proposed method reduces transmission delay, packet loss, and bandwidth consumption by 13.8%, enabling effective real-time video transmission.
Stateful Layered Chain Model to Improve the Scalability of Bitcoin
Dalia Elwi, O. Elnasr, A. Tolba, S. Elmougy
Pub Date: 2023-01-01 | DOI: 10.5455/jjcit.71-1674157604
Bitcoin has become a focus of scientific research in the modern era. Blockchain is the underlying technology of Bitcoin because of its decentralization, transparency, trustlessness, and immutability. However, the blockchain can also be considered the cause of Bitcoin's scalability issues, especially storage: nodes in the Bitcoin network need to store the full blockchain to validate transactions, and over time the blockchain becomes bulky. Full nodes will then prefer to leave the network, pushing the blockchain toward centralization and reliance on trust and adversely affecting security. This paper proposes a Stateful Layered Chain Model, based on storing accounts' balances, to reduce the Bitcoin blockchain size. The model changes the structure of the traditional blockchain from blocks to layers. Experimental results demonstrate that the proposed model reduces the blockchain size by about 50.6%; implicitly, transaction throughput can also be doubled.