Pub Date: 2023-08-25 | DOI: 10.59164/univers.v24i24.2862
Assoc. Prof. Dr. Ramiz Zekaj
The Albanian Institute of Islamic Thought and Civilization (Instituti Shqiptar i Mendimit dhe Qytetërimit Islam, AIITC) is one of the pioneering institutions in promoting Islamic culture, literature, and scholarship in post-1990s Albania. Founded in 1996, it has been an important bridge connecting the Albanians of Albania and the neighbouring countries with Islamic culture, which suffered harsh frontal blows throughout the twentieth century, particularly under the totalitarian communist regime, culminating in the ban on religious practice in 1967. In this study we attempt to shed light on the Institute's contribution to the revival of Islam in Albania through its publishing activity, which has introduced the Muslims of Albania, and beyond, to some of the foundational works of the Islamic tradition. It is, however, beyond the scope of this study to treat all the books the Institute has published, given their sheer volume: more than two hundred published titles.
Title: "Kontributi i Institutit Shqiptar të Mendimit dhe Qytetërimit Islam (AIITC) në promovimin e kulturës islame, arsimit dhe shkencës shqiptare përgjatë 1996-2022" | Author: Assoc. Prof. Dr. Ramiz Zekaj | DOI: 10.59164/univers.v24i24.2862
Pub Date: 2023-08-25 | DOI: 10.59164/univers.v24i24.2870
Prof. dr. Fahrush Rexhepi
During the spread and development of Islamic culture among the Albanians, the Albanian ulema made a valuable contribution to cultivating and preserving Islamic religious culture and tradition among Albanians. Besides their various religious activities, they made a particular contribution first by translating and later by writing various Islamic books and texts in the Albanian language. Initially these authors wrote and created works in Albanian using the Arabic-Ottoman alphabet (from the end of the 17th century to the 19th century). During the century we have left behind, especially from the 1950s to the present day, a considerable number of books in the field of the Sira (the biography of the Prophet, a.s.) have seen publication. Recognizing the importance of the biography of Muhammad (a.s.) for Albanian Muslim believers, and of his example as a model for the whole world and for humanity, these authors wrote and published books and articles on his figure and personality with great zeal. Since 1996 I have lectured on and covered this field at the Faculty of Islamic Studies, and since I began publishing on this subject in various journals early on, I considered it reasonable to present at this scientific conference, on the theme "Islamic Literature in the Albanian Language 1945-2022", precisely this topic, but with a different treatment and approach. We believe that the present educational and intellectual standing of Albanian Muslim believers makes it necessary to gain direct knowledge of the history of the birth and spread of the Islamic faith, of the life and preaching of the Prophet Muhammad (a.s.), and of the publications of distinguished Eastern and Western authors devoted to the Prophet's biography. Above all, however, of particular interest to Albanians of the Islamic faith are the publications in their mother tongue devoted to the life and history of the Prophet (a.s.).
Title: "Botimi dhe përkthimi i librave e teksteve më të rëndësishme të sires në gjuhën shqipe" | Author: Prof. dr. Fahrush Rexhepi | DOI: 10.59164/univers.v24i24.2870
Reputation generation systems are decision-making tools used in different domains including e-commerce, tourism, social media events, etc. Such systems generate a numerical reputation score by analyzing and mining massive amounts of various types of user data, including textual opinions, social interactions, shared images, etc. Over the past few years, users have been sharing millions of tweets related to cryptocurrencies. Yet, no system in the literature was designed to handle the unique features of this domain with the goal of automatically generating reputation and supporting investors’ and users’ decision-making. Therefore, we propose the first financially oriented reputation system that generates a single numerical value from user-generated content on Twitter toward cryptocurrencies. The system processes the textual opinions by applying a sentiment polarity extractor based on the fine-tuned auto-regressive language model named XLNet. Also, the system proposes a technique to enhance sentiment identification by detecting sarcastic opinions through examining the contrast of sentiment between the textual content, images, and emojis. Furthermore, other features are considered, such as the popularity of the opinions based on the social network interactions (likes and shares), the intensity of the entity’s demand within the opinions, and news influence on the entity. A survey experiment has been conducted by gathering numerical scores from 827 Twitter users interested in cryptocurrencies. Each selected user assigns 3 numerical assessment scores toward three cryptocurrencies. The average of those scores is considered ground truth. The experiment results show the efficacy of our model in generating a reliable numerical reputation value compared with the ground truth, which proves that the proposed system may be applied in practice as a trusted decision-making tool.
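The aggregation step the abstract describes can be sketched in a few lines: per-tweet sentiment scores are combined into one value, weighted by social interactions. The weighting scheme, field names, and 0-100 scale below are illustrative assumptions, not the authors' exact formulation.

```python
import math

def reputation_score(tweets):
    """Aggregate per-tweet sentiment into one reputation value in [0, 100].

    Each tweet is a dict with:
      sentiment      - polarity in [-1, 1] (e.g. from a fine-tuned XLNet classifier)
      likes, shares  - interaction counts used as a popularity weight
    """
    num, den = 0.0, 0.0
    for t in tweets:
        # Log-damped popularity so one viral tweet cannot dominate the score.
        weight = 1.0 + math.log1p(t["likes"] + 2 * t["shares"])
        num += weight * t["sentiment"]
        den += weight
    if den == 0.0:
        return 50.0  # neutral default when there are no opinions
    mean = num / den                      # popularity-weighted mean in [-1, 1]
    return round(50.0 * (mean + 1.0), 2)  # map to [0, 100]

tweets = [
    {"sentiment": 0.8, "likes": 120, "shares": 40},
    {"sentiment": -0.3, "likes": 5, "shares": 0},
    {"sentiment": 0.5, "likes": 30, "shares": 10},
]
score = reputation_score(tweets)  # mostly positive opinions, so above 50
```

The real system additionally folds in sarcasm detection, demand intensity, and news influence before producing the final value.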
Title: "Aggregating Users' Online Opinions Attributes and News Influence for Cryptocurrencies Reputation Generation" | Authors: Achraf Boumhidi, Abdessamad Benlahbib, E. Nfaoui | DOI: 10.3897/jucs.85610 | J. Univers. Comput. Sci., pp. 546-568 (2023-06-28)
J. Ruiz, María Ángeles Verdejo-Espinosa, Alicia Montoro-Lendínez, M. Espinilla
Nowadays, it is becoming increasingly important to understand the multiple configuration factors of BLE anchors in indoor location systems. This task becomes particularly crucial in the context of activity recognition in multi-occupancy smart environments. Knowing the impact of the configuration of BLE anchors in an indoor location system allows us to distinguish the interactions performed by each inhabitant in a smart environment according to their proximity to each sensor. This paper proposes a new methodology, OBLEA, that determines the optimisation of Bluetooth Low Energy (BLE) anchors in indoor location systems, considering multiple BLE variables to increase flexibility and facilitate transferability to other environments. Concretely, we present a model based on a data-driven approach that considers configurations to obtain the best performing configuration with a minimum number of anchors. This methodology includes a flexible framework for the indoor space, the architecture to be deployed, which considers the RSSI value of the BLE anchors, and finally, optimisation and inference for indoor location. As a case study, OBLEA is applied to determine the location of ageing inhabitants in a nursing home in Alcaudete, Jaén (Spain). Results show the extracted knowledge related to the optimisation of BLE anchors involved in the case study.
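One common ingredient of RSSI-based proximity systems like the one described is converting an anchor's RSSI reading into an approximate distance via a log-distance path-loss model, then attributing an interaction to the nearest anchor. This is a generic sketch under assumed calibration parameters, not OBLEA's own optimisation procedure.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Estimate anchor-to-tag distance (metres) from an RSSI reading.

    Log-distance path-loss model: RSSI = tx_power - 10 * n * log10(d),
    hence d = 10 ** ((tx_power - RSSI) / (10 * n)).
    tx_power_dbm is the calibrated RSSI at 1 m; path_loss_exp (n) depends
    on the environment (about 2 in free space, higher indoors).
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def nearest_anchor(readings):
    """Attribute the inhabitant to the anchor with the strongest RSSI."""
    return max(readings, key=readings.get)

# Hypothetical readings (dBm) from three BLE anchors for one wearable tag.
readings = {"anchor_kitchen": -71.0, "anchor_bedroom": -54.0, "anchor_hall": -83.0}
room = nearest_anchor(readings)
d = rssi_to_distance(-59.0)  # reading at the 1 m calibration point
```

OBLEA's contribution is choosing how many such anchors to deploy and with which configuration, driven by data rather than by a fixed propagation model.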
Title: "OBLEA: A New Methodology to Optimise Bluetooth Low Energy Anchors in Multi-occupancy Location Systems" | Authors: J. Ruiz, María Ángeles Verdejo-Espinosa, Alicia Montoro-Lendínez, M. Espinilla | DOI: 10.3897/jucs.96878 | J. Univers. Comput. Sci., pp. 627-646 (2023-06-28)
Soukaina Benabdelouahab, José A. García-Berná, Chaimae Moumouh, J. M. C. D. Gea, J. E. Bouhdidi, Yacine El Younoussi, J. Alemán
Due to the substantial development of information and communications technology, the use of E-learning in higher education has become essential to boost teaching methods and enhance students' learning skills and competencies. E-learning in Software Engineering turns out to be increasingly interesting for scholars. In fact, researchers have worked to enhance modern Software Engineering education techniques to meet the required educational objectives. The aim of this article is to analyse the scientific production on E-learning Software Engineering education by conducting a bibliometric analysis of 10,603 publications, dating from 1954 to 2020 and available in the Scopus database. The results reveal some scientific production information, such as the temporal evolution of the publications, the most prolific authors, institutions and countries, as well as the languages used. Besides, the paper evaluates additional bibliometric parameters, including the authors' production, journal productivity, and scientific cooperation, among other bibliometric parameters. The subject of the current study has not been treated by any previous bibliometric studies. Our research is deeper and more specific; it covers a long period of 66 years and a large number of publications, thanks to the chosen search string containing the different spellings of the used terms. In addition, the literature is analysed using several tools such as Microsoft Excel, VOSviewer, and Python. The research findings can be used to identify the current state of E-learning Software Engineering Education, as well as to identify various research trends and the general direction of E-learning research.
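The basic bibliometric counts mentioned (publications per year, most prolific authors) reduce to frequency counting over the exported records. A minimal sketch with Python's standard library, using toy records whose field names are illustrative, not the Scopus export schema:

```python
from collections import Counter

# Toy stand-ins for Scopus records; "year"/"authors" field names are assumptions.
records = [
    {"title": "...", "year": 2019, "authors": ["A", "B"]},
    {"title": "...", "year": 2020, "authors": ["A"]},
    {"title": "...", "year": 2020, "authors": ["C", "A"]},
]

# Temporal evolution of the publications.
per_year = Counter(r["year"] for r in records)

# Author productivity across all records.
per_author = Counter(a for r in records for a in r["authors"])
most_prolific, n_papers = per_author.most_common(1)[0]
```

The study itself combines such counts with Microsoft Excel, VOSviewer (for co-authorship and keyword maps), and Python scripts.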
Title: "A Bibliometric Study on E-Learning Software Engineering Education" | Authors: Soukaina Benabdelouahab, José A. García-Berná, Chaimae Moumouh, J. M. C. D. Gea, J. E. Bouhdidi, Yacine El Younoussi, J. Alemán | DOI: 10.3897/jucs.87550 | J. Univers. Comput. Sci., pp. 510-545 (2023-06-28)
Plants are a major part of the ecosystem and are used by humans for many purposes. Cotton is one of the most important of these plants: its production is a key source of income for many countries and farmers around the world. Like other living things, cotton can contract diseases, and detecting them is critical. In this study, a model is developed for disease detection from cotton leaves: a deep convolutional neural network that determines from a photograph whether the plant is healthy or diseased. Care is taken to make the model problem-specific, and the grey wolf optimization algorithm is used to search for the most efficient architecture. The proposed model is compared with the ResNet50, VGG19, and InceptionV3 models frequently used in the literature. According to the results obtained, the proposed model reaches an accuracy of 1.0, while the other models reach 0.726, 0.934, and 0.943, respectively, so the proposed model outperforms them.
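In the paper, the grey wolf optimizer searches over network architecture parameters; its core mechanic is that every candidate solution moves toward the three best solutions found so far. A self-contained sketch of that mechanic on a toy continuous objective (an elitist variant that keeps the three leaders in place; parameter choices are illustrative):

```python
import random

def grey_wolf_optimize(f, dim, bounds, n_wolves=15, n_iter=100, seed=0):
    """Minimise f over `dim` variables with a Grey Wolf Optimizer sketch.

    Wolves move toward the three best solutions so far (alpha, beta, delta);
    the control parameter `a` decays from 2 to 0, shifting the search from
    exploration to exploitation.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for it in range(n_iter):
        wolves.sort(key=f)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 - 2.0 * it / n_iter
        for i in range(3, n_wolves):   # keep the three leaders unchanged
            new = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A = 2 * a * r1 - a          # exploration/exploitation step
                    C = 2 * r2                  # random emphasis on the leader
                    x += leader[d] - A * abs(C * leader[d] - wolves[i][d])
                new.append(min(hi, max(lo, x / 3.0)))  # average of the 3 pulls
            wolves[i] = new
    return min(wolves, key=f)

sphere = lambda v: sum(x * x for x in v)  # toy objective, minimum at the origin
best = grey_wolf_optimize(sphere, dim=3, bounds=(-5.0, 5.0))
```

In the architecture-search setting, `f` would instead train a candidate CNN and return its validation error, and the coordinates would encode choices such as layer counts and filter sizes.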
Title: "A novel deep learning model with the Grey Wolf Optimization algorithm for cotton disease detection" | Author: Burak Gülmez | DOI: 10.3897/jucs.94183 | J. Univers. Comput. Sci., pp. 595-626 (2023-06-28)
The inference of politically-oriented information from text data is a popular research topic in Natural Language Processing (NLP) at both text- and author-level. In recent years, studies of this kind have been implemented with the aid of text representations ranging from simple count-based models (e.g., bag-of-words) to sequence-based models built from transformers (e.g., BERT). Despite considerable success, however, we may still ask whether results may be improved further by combining these models with additional text representations. To shed light on this issue, the present work describes a series of experiments to compare a number of strategies for political bias and ideology inference from text data using sequence-based BERT models and syntax- and semantics-driven features, and examines which of these representations (or their combinations) improve overall model accuracy. Results suggest that one particular strategy, namely the combination of BERT language models with syntactic dependencies, significantly outperforms well-known count- and sequence-based text classifiers alike. In particular, the combined model has been found to improve accuracy across all tasks under consideration, outperforming the SemEval hyperpartisan news detection top-performing system by up to 6%, and outperforming the use of BERT alone by up to 21%, making a potentially strong case for the use of heterogeneous text representations in the present tasks.
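A common way to combine a transformer sentence embedding with syntactic dependencies is simply to concatenate the embedding with a bag-of-dependency-relations vector and feed the result to a classifier. The sketch below illustrates that idea with a toy 4-dimensional "embedding" and a hand-written parse; the relation inventory and pairing format are assumptions, not the paper's exact feature set.

```python
def dependency_features(parsed_tokens, relations=("nsubj", "obj", "amod", "advmod")):
    """Bag-of-dependency-relations counts from a parsed sentence.

    `parsed_tokens` is a list of (token, dep_relation) pairs, e.g. as
    produced by a dependency parser such as spaCy or UDPipe.
    """
    counts = [0.0] * len(relations)
    for _, rel in parsed_tokens:
        if rel in relations:
            counts[relations.index(rel)] += 1.0
    return counts

def combine(sentence_embedding, parsed_tokens):
    """Concatenate semantic (embedding) and structural (syntax) signals."""
    return list(sentence_embedding) + dependency_features(parsed_tokens)

emb = [0.12, -0.40, 0.33, 0.05]            # toy stand-in for a BERT embedding
parse = [("taxes", "nsubj"), ("rise", "ROOT"), ("sharply", "advmod")]
features = combine(emb, parse)             # 4 embedding dims + 4 relation counts
```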
Title: "Politically-oriented information inference from text" | Authors: S. C. D. Silva, Ivandré Paraboni | DOI: 10.3897/jucs.96652 | J. Univers. Comput. Sci., pp. 569-594 (2023-06-28)
C. Domínguez, Jónathan Heras, Eloy J. Mata, Vico Pascual, Lucas Fernández-Cedrón, Marcos Martínez-Lanchares, Jon Pellejero-Espinosa, Antonio Rubio-Loscertales, C. Tarragona-Pérez
In waste recycling plants, measuring the waste volume and weight at the beginning of the treatment process is key for a better management of resources. This task can be conducted by using orthophoto images, but it is necessary to remove from those images the objects, such as containers or trucks, that are not involved in the measurement process. This work proposes the application of deep learning for the semantic segmentation of those irrelevant objects. Several deep architectures are trained and compared, while three semi-supervised learning methods (PseudoLabeling, Distillation and Model Distillation) are proposed to take advantage of non-annotated images. In these experiments, the U-net++ architecture with an EfficientNetB3 backbone, trained with the set of labelled images, achieves the best overall multi Dice score of 91.23%. The application of semi-supervised learning methods further boosts the segmentation accuracy in a range between 1.31% and 2.59%, on average.
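The multi Dice score used as the evaluation metric is the per-class Dice coefficient averaged over classes. A minimal sketch on flat label arrays (skipping classes absent from both masks is an assumption about the exact "multi Dice" definition):

```python
def multi_dice(pred, target, n_classes):
    """Average per-class Dice score between two flat label sequences.

    Dice for class c = 2 * |pred_c ∩ target_c| / (|pred_c| + |target_c|).
    """
    scores = []
    for c in range(n_classes):
        p = [x == c for x in pred]
        t = [x == c for x in target]
        inter = sum(a and b for a, b in zip(p, t))
        total = sum(p) + sum(t)
        if total == 0:
            continue  # class absent from both masks; skip it
        scores.append(2.0 * inter / total)
    return sum(scores) / len(scores)

pred   = [0, 0, 1, 1, 2, 2, 0, 1]
target = [0, 0, 1, 2, 2, 2, 0, 1]
score = multi_dice(pred, target, n_classes=3)
```

In the paper this metric is computed over segmentation masks (per-pixel labels) rather than toy sequences, but the arithmetic is the same.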
Title: "Semi-Supervised Semantic Segmentation for Identification of Irrelevant Objects in a Waste Recycling Plant" | Authors: C. Domínguez, Jónathan Heras, Eloy J. Mata, Vico Pascual, Lucas Fernández-Cedrón, Marcos Martínez-Lanchares, Jon Pellejero-Espinosa, Antonio Rubio-Loscertales, C. Tarragona-Pérez | DOI: 10.2139/ssrn.4116055 | J. Univers. Comput. Sci., pp. 419-431 (2023-05-28)
In conjunction with the global concern regarding the spread of fake news on social media, there is a large flow of research to address this phenomenon. The wide growth in social media and online forums has made it easy for legitimate news to merge with comprehensively misleading news, negatively affecting people’s perceptions and misleading them. As such, this study aims to use deep learning, pre-trained models, and machine learning to predict Arabic and English fake news based on three public and available datasets: the Fake-or-Real dataset, the AraNews dataset, and the Sentimental LIAR dataset. Based on the GloVe (Global Vectors) and FastText pre-trained models, a hybrid network has been proposed to improve the prediction of fake news. In this proposed network, a CNN (Convolutional Neural Network) was used to identify the most important features, while a BiGRU (Bidirectional Gated Recurrent Unit) was used to capture the long-term dependencies of sequences. Finally, a multi-layer perceptron (MLP) is applied to classify each news article as fake or real. In addition, an Improved Random Forest Model is built based on the embedding values extracted from the BERT (Bidirectional Encoder Representations from Transformers) pre-trained model and the relevant speaker-based features. These relevant features are identified by a fuzzy model based on feature selection methods. Accuracy was used as the measure of the quality of our proposed models, whereby the prediction accuracy reached 0.9935, 0.9473, and 0.7481 for the Fake-or-Real dataset, the AraNews dataset, and the Sentimental LIAR dataset, respectively. The proposed models showed a significant improvement in the accuracy of predicting Arabic and English fake news compared to previous studies that used the same datasets.
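The CNN stage described above slides small filters over the embedded token sequence and keeps the strongest activations as "important features". A stdlib-only sketch of that idea on a single toy embedding dimension (the kernel and values are illustrative, not the paper's trained weights):

```python
def conv1d(seq, kernel):
    """Valid 1-D convolution (cross-correlation, as used in CNNs) of a
    per-token score sequence with a small kernel."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

def global_max_pool(feature_map):
    """Keep only the strongest activation: the most salient pattern the
    filter detected anywhere in the sequence."""
    return max(feature_map)

# Toy per-token scores standing in for one dimension of GloVe/FastText embeddings.
tokens = [0.1, 0.9, 0.8, 0.1, 0.0, 0.7]
edge_kernel = [1.0, -1.0]           # fires on sharp drops between adjacent tokens
fmap = conv1d(tokens, edge_kernel)
strongest = global_max_pool(fmap)   # steepest drop, 0.8 - 0.1 ≈ 0.7
```

In the full model, many such filters run over all embedding dimensions, and the pooled activations feed the BiGRU and MLP stages.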
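The final stage of the hybrid network described above is a multi-layer perceptron that maps learned features to a fake/real decision. Below is a toy, pure-Python sketch of that stage only: a one-hidden-layer forward pass ending in a sigmoid probability. All weights and feature values are invented for demonstration; the paper's actual architecture, layer sizes, and parameters are not given in the abstract.

```python
import math

# Illustrative MLP forward pass: a feature vector (standing in for
# concatenated CNN/BiGRU outputs) is mapped to P(fake). Weights are
# hypothetical, not from the paper.

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    # Hidden layer: ReLU(W·x + b), one row of w_hidden per hidden unit
    hidden = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    # Output layer: sigmoid over a single logit -> probability of "fake"
    logit = sum(wi * hi for wi, hi in zip(w_out, hidden)) + b_out
    return 1.0 / (1.0 + math.exp(-logit))

features = [0.2, -0.5, 0.8]                         # stand-in embeddings
w_hidden = [[0.4, -0.1, 0.3], [0.2, 0.6, -0.5]]     # 2 hidden units
b_hidden = [0.0, 0.1]
p_fake = mlp_forward(features, w_hidden, b_hidden, [1.0, -1.0], 0.0)
print(0.0 <= p_fake <= 1.0)   # probability stays in [0, 1]
```

In practice this head would be trained jointly with the CNN and BiGRU layers; the sketch only shows why a sigmoid output is a natural fit for the binary fake/real decision.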
{"title":"Developed Models Based on Transfer Learning for Improving Fake News Predictions","authors":"Tahseen A. Wotaifi, B. N. Dhannoon","doi":"10.3897/jucs.94081","DOIUrl":"https://doi.org/10.3897/jucs.94081","url":null,"abstract":"In conjunction with the global concern regarding the spread of fake news on social media, a large body of research has emerged to address this phenomenon. The rapid growth of social media and online forums has made it easy for legitimate news to become mixed with misleading news, negatively affecting people’s perceptions and misleading them. As such, this study aims to use deep learning, pre-trained models, and machine learning to predict Arabic and English fake news based on three publicly available datasets: the Fake-or-Real dataset, the AraNews dataset, and the Sentimental LIAR dataset. Based on the GloVe (Global Vectors) and FastText pre-trained models, a hybrid network is proposed to improve the prediction of fake news. In this network, a CNN (Convolutional Neural Network) is used to identify the most important features, while a BiGRU (Bidirectional Gated Recurrent Unit) measures the long-term dependencies of sequences. Finally, a multi-layer perceptron (MLP) classifies news articles as fake or real. In addition, an Improved Random Forest Model is built on the embedding values extracted from the BERT (Bidirectional Encoder Representations from Transformers) pre-trained model and on relevant speaker-based features, identified by a fuzzy model based on feature selection methods. Accuracy is used to measure the quality of the proposed models; the prediction accuracy reached 0.9935, 0.9473, and 0.7481 for the Fake-or-Real, AraNews, and Sentimental LIAR datasets respectively. The proposed models show a significant improvement in the accuracy of predicting Arabic and English fake news compared to previous studies that used the same datasets.","PeriodicalId":14652,"journal":{"name":"J. Univers. Comput. Sci.","volume":"25 1","pages":"491-507"},"PeriodicalIF":0.0,"publicationDate":"2023-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85927923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract: To identify autoimmune diseases in humans, analysis of HEp-2 staining patterns at the cell level is the gold standard in clinical practice and research. Automating this procedure is a complicated task due to variations in cell densities, sizes, shapes and patterns, overfitting of features, large-scale data volumes, stained cells and poor image quality. Several machine learning methods that analyse and classify HEp-2 cell microscope images already exist; however, because of those challenges, their accuracy is still below the level required for medical applications and computer-aided diagnosis. The purpose of this work is to automate the classification of HEp-2 stained cells from microscopic images and to improve the accuracy of computer-aided diagnosis. This work proposes a Deep Convolutional Neural Networks (DCNNs) technique that classifies HEp-2 cell patterns at the cell level into six classes, employing the level-set method with edge detection to segment the HEp-2 cell shape. The DCNNs are designed to identify cell-shape and fundamental distance features associated with HEp-2 cell types. The paper investigates the effectiveness of the proposed method on a benchmark dataset. The results show that the proposed method is clearly superior to other methods evaluated on the benchmark dataset, including state-of-the-art methods, and that it adapts well across variations in cell densities, sizes, shapes and patterns, overfitting of features, large-scale data volumes, and cells stained under different laboratory conditions. Accurate classification of HEp-2 staining patterns at the cell level will help increase the accuracy of computer-aided diagnosis in the future.
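The abstract above describes a six-way classification of HEp-2 cell patterns. A minimal sketch of the final decision step of such a classifier follows: softmax over six class logits, then argmax. The class names listed are the commonly used HEp-2 pattern labels (e.g. from the ICPR cell-level benchmarks) and are an assumption; the paper's exact label set is not stated in the abstract, and the logits are invented.

```python
import math

# Toy six-way decision step: softmax over class logits, then argmax.
# HEP2_CLASSES is an assumed label set, not confirmed by the paper.

HEP2_CLASSES = ["homogeneous", "speckled", "nucleolar",
                "centromere", "golgi", "numem"]

def softmax(logits):
    m = max(logits)                             # shift for stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classify(logits):
    probs = softmax(logits)
    return HEP2_CLASSES[probs.index(max(probs))]

# Hypothetical logits a DCNN might emit for one segmented cell:
print(classify([0.1, 2.3, -0.4, 0.0, 1.1, -1.0]))  # → speckled
```

In the pipeline the abstract describes, the logits would come from the DCNN applied to the level-set-segmented cell; the max-shift inside the softmax is a standard trick to avoid overflow for large logits.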
{"title":"Automated Classification of Cell Level of HEp-2 Microscopic Images Using Deep Convolutional Neural Networks-Based Diameter Distance Features","authors":"Mitchell Jensen, Khamael Al-Dulaimi, Khairiyah Saeed Abduljabbar, Jasmine Banks","doi":"10.3897/jucs.96293","DOIUrl":"https://doi.org/10.3897/jucs.96293","url":null,"abstract":"Abstract: To identify autoimmune diseases in humans, analysis of HEp-2 staining patterns at the cell level is the gold standard in clinical practice and research. Automating this procedure is a complicated task due to variations in cell densities, sizes, shapes and patterns, overfitting of features, large-scale data volumes, stained cells and poor image quality. Several machine learning methods that analyse and classify HEp-2 cell microscope images already exist; however, because of those challenges, their accuracy is still below the level required for medical applications and computer-aided diagnosis. The purpose of this work is to automate the classification of HEp-2 stained cells from microscopic images and to improve the accuracy of computer-aided diagnosis. This work proposes a Deep Convolutional Neural Networks (DCNNs) technique that classifies HEp-2 cell patterns at the cell level into six classes, employing the level-set method with edge detection to segment the HEp-2 cell shape. The DCNNs are designed to identify cell-shape and fundamental distance features associated with HEp-2 cell types. The paper investigates the effectiveness of the proposed method on a benchmark dataset. The results show that the proposed method is clearly superior to other methods evaluated on the benchmark dataset, including state-of-the-art methods, and that it adapts well across variations in cell densities, sizes, shapes and patterns, overfitting of features, large-scale data volumes, and cells stained under different laboratory conditions. Accurate classification of HEp-2 staining patterns at the cell level will help increase the accuracy of computer-aided diagnosis in the future.","PeriodicalId":14652,"journal":{"name":"J. Univers. Comput. Sci.","volume":"12 1","pages":"432-445"},"PeriodicalIF":0.0,"publicationDate":"2023-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73943486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}