Theme Mapping and Bibliometric Analysis of Two Decades of Smart Farming
Tri Kushartadi, A. Mulyono, Azhari Haris Al Hamdi, Muhammad Afif Rizki, Muhammad Anwar Sadat Faidar, Wirawan Dwi Harsanto, M. Suryanegara, M. Asvial (doi:10.3390/info14070396, published 11 July 2023)
The global population is estimated to reach 9 billion by 2050, which implies an increase in food demand. Agriculture is the primary source of food production worldwide, and improving its efficiency and productivity by integrating it with information and communication technology systems, so-called “smart farming”, is a promising approach to optimizing the food supply. This research employed bibliometric analysis techniques to investigate smart farming trends, identify their potential benefits, and analyze research insights. Data were collected from 1141 publications in the Scopus database covering the period 1997–2021 and were processed with VOSviewer, which quantified the connections between articles using co-citation as the unit of analysis, resulting in a mapping of 10 clusters ranging from agriculture to soil moisture. Finally, the analysis focuses on three major themes of smart farming: the IoT; blockchain and agricultural robots; and smart agriculture, crops, and irrigation.
Findings on Ad Hoc Contractions
Sing Choi, K. Taghva (doi:10.3390/info14070391, published 10 July 2023)
Abbreviations are often overlooked, since their frequency and acceptance make them almost second nature in everyday communication. Business names, handwritten notes, online messaging, professional domains, and different languages all have their own sets of abbreviations. The abundance and frequent introduction of new abbreviations create multiple areas of overlap and ambiguity, which means documents often lose their clarity. We reverse engineered the process of creating these ad hoc abbreviations and report preliminary statistics on what makes them easier or harder to define. In addition, we generated candidate definitions for which a word sense disambiguation model found it difficult to select the correct definition.
On Isotropy of Multimodal Embeddings
Kirill Tyshchuk, Polina Karpikova, Andrew Spiridonov, Anastasiia Prutianova, Anton Razzhigaev, A. Panchenko (doi:10.3390/info14070392, published 10 July 2023)
Embeddings, i.e., vector representations of objects such as texts, images, or graphs, play a key role in modern deep learning. Prior research has shown the importance of analyzing the isotropy of textual embeddings produced by transformer-based text encoders such as BERT. Anisotropic word embeddings do not use the entire space and instead concentrate in a narrow cone of the pretrained vector space, which negatively affects the performance of applications such as textual semantic similarity. Transforming a vector space to improve isotropy has been shown to be beneficial for text processing tasks. This paper is the first comprehensive investigation of the distribution of multimodal embeddings, using OpenAI’s CLIP pretrained model as an example. We aimed to deepen the understanding of the multimodal embedding space, which has previously been unexplored in this respect, and to study the impact on various end tasks. Our initial efforts focused on measuring the alignment of image and text embedding distributions, with an emphasis on their isotropic properties. In addition, we evaluated several gradient-free approaches to enhancing these properties, establishing their efficiency in improving the isotropy/alignment of the embeddings and, in certain cases, zero-shot classification accuracy. Significantly, our analysis revealed that both CLIP and BERT models yield embeddings situated within a cone immediately after initialization, before any training, although they are mostly isotropic in the local sense. We further extended our investigation to the structure of multilingual CLIP text embeddings, confirming that the observed characteristics are language-independent. By computing few-shot classification accuracy and point-cloud metrics, we provide evidence of a strong correlation among multilingual embeddings. Transforming embeddings with the methods described in this article makes them easier to visualize, and multiple experiments showed that downstream task performance on the transformed embeddings does not drop substantially (and is sometimes even improved). This means that one can obtain an easily visualizable embedding space without substantially losing downstream task quality.
Monkeypox, Disinformation, and Fact-Checking: A Review of Ten Iberoamerican Countries in the Context of Public Health Emergency
Noemí Morejón-Llamas, F. Cristòfol (doi:10.3390/info14070390, published 9 July 2023)
This paper examines disinformation and fact-checking activity in ten Ibero-American countries during the 2022 monkeypox outbreak. Using a mixed-methods approach based on content analysis, we study the debunkings published on the websites of fact-checking organizations in these countries between 7 May and 10 September 2022. The countries with the highest number of debunkings are Spain and Bolivia, through the verification agencies Maldita and Bolivia Verifica, respectively. The outbreak’s onset marked a peak in the spread of hoaxes, particularly following the declaration of the disease as a public health emergency. Identifying disinformants is challenging due to the diversity of dissemination channels, although Twitter is the predominant platform. The preferred disinformation format is image text, and the most common theme links monkeypox to a side effect of the SARS-CoV-2 vaccine. Furthermore, the internationalization capacity of scientific hoaxes is demonstrated. Fact-checking agencies conduct adequate and thorough source verification, relying predominantly on official and expert sources; however, they employ few digital tools that could expedite the verification process. Disinformation about monkeypox is closely related to COVID-19 hoaxes, either resurrecting conspiracy theories or disseminating speeches by well-known anti-vaccine activists from healthcare collectives who were influential during the pandemic.
Deep-Learning-Based Human Chromosome Classification: Data Augmentation and Ensemble
M. D’Angelo, L. Nanni (doi:10.3390/info14070389, published 9 July 2023)
Object classification is a crucial task in deep learning that involves the identification and categorization of objects in images or videos. Although humans can easily recognize common objects such as cars, animals, or plants, performing this task at scale is time-consuming and error-prone. Automating it with neural networks can therefore save time and effort while achieving higher accuracy. Our study focuses on the classification step of human chromosome karyotyping, an important medical procedure that helps diagnose genetic disorders. Traditionally, this task is performed manually by expert cytologists, a time-consuming process that requires specialized medical skills, so automating it with deep learning can be immensely useful. To accomplish this, we implemented and adapted existing preprocessing and data augmentation techniques to prepare the chromosome images for classification. We used a ResNet-50 convolutional neural network and a Swin Transformer, coupled with an ensemble approach, to classify the chromosomes, obtaining state-of-the-art performance on the tested dataset.
A Survey of Internet of Things and Cyber-Physical Systems: Standards, Algorithms, Applications, Security, Challenges, and Future Directions
Kwok Tai Chui, B. B. Gupta, Jiaqi Liu, Varsha Arya, N. Nedjah, Ammar Almomani, Priyanka Chaurasia (doi:10.3390/info14070388, published 8 July 2023)
The smart city vision has driven the rapid development and advancement of interconnected technologies based on the Internet of Things (IoT) and cyber-physical systems (CPS). In this paper, various aspects of IoT and CPS from 2013 to May 2023 are surveyed. The survey begins with industry standards, which ensure cost-effective solutions and interoperability. With ever-growing big data, tremendous undiscovered knowledge can be mined and transformed into useful applications. Machine learning algorithms are taking the lead in achieving various target applications, with formulations such as classification, clustering, regression, prediction, and anomaly detection. Notably, attention has shifted from traditional machine learning algorithms to advanced algorithms, including deep learning, transfer learning, and data generation algorithms, which provide more accurate models. In recent years, there has been an increasing need for advanced security techniques and defense strategies to detect and prevent attacks on IoT and CPS. Research challenges and future directions are summarized. We hope that more researchers will conduct further studies on IoT and CPS.
A Comprehensive Survey of Facet Ranking Approaches Used in Faceted Search Systems
Esraa Ali, A. Caputo, Gareth J. F. Jones (doi:10.3390/info14070387, published 7 July 2023)
Faceted search systems (FSSs) have gained prominence as one of the dominant search approaches in vertical search systems. They provide facets that educate users about the information space and allow them to refine their search query and navigate back and forth between resources on a single results page. Despite the importance of this problem, studies dedicated solely to the investigation of facet ranking methods, or to how this step, apart from other aspects of faceted search, affects the user’s search experience, are rare. The objective of this survey is to review the state of the art in research on faceted search systems, with a focus on existing facet ranking approaches and the key challenges posed by this problem. The survey also examines state-of-the-art FSS evaluation frameworks and the techniques and metrics most commonly used to evaluate facet ranking approaches. It further lays out criteria for dataset appropriateness and the structure a dataset needs in order to be used in evaluating facet ranking methods apart from other FSS aspects. The paper concludes by highlighting gaps in current research and future research directions in this area.
Unveiling Key Themes and Establishing a Hierarchical Taxonomy of Disaster-Related Tweets: A Text Mining Approach for Enhanced Emergency Management Planning
James Durham, Sudipta Chowdhury, Ammar Alzarrad (doi:10.3390/info14070385, published 7 July 2023)
Effectively harnessing the power of social media data for disaster management requires sophisticated analysis methods and frameworks. This research focuses on understanding the contextual information present in social media posts during disasters and developing a taxonomy to effectively categorize and classify the diverse range of topics discussed. First, the existing literature on social media analysis in disaster management is explored, highlighting the limitations and gaps in current methodologies. Second, a dataset comprising real-time social media posts related to various disasters is collected and preprocessed to ensure data quality and reliability. Third, three well-established topic modeling techniques, namely Latent Dirichlet Allocation (LDA), Latent Semantic Analysis (LSA), and Non-Negative Matrix Factorization (NMF), are employed to extract and analyze the latent topics and themes present in the social media data. The contributions of this research lie in the development of a taxonomy that effectively categorizes and classifies disaster-related social media data, the identification of key latent topics and themes, and the extraction of valuable insights to support and enhance emergency management efforts. Overall, the findings of this research have the potential to transform the way emergency management and response are conducted by harnessing the power of social media data. By incorporating these insights into decision-making processes, emergency managers can make more informed and strategic choices, resulting in more efficient and effective emergency response strategies. This, in turn, leads to improved outcomes, better utilization of resources, and ultimately, the ability to save lives and mitigate the impacts of disasters.
A Novel Authentication Method That Combines Honeytokens and Google Authenticator
Vassilis Papaspirou, Maria Papathanasaki, L. Maglaras, I. Kantzavelou, C. Douligeris, M. Ferrag, H. Janicke (doi:10.3390/info14070386, published 7 July 2023)
Despite the rapid development of technology, computer systems still rely heavily on passwords for security, which can be problematic. Although multi-factor authentication has been introduced, it is not completely effective against more advanced attacks. To address this, this study proposes a new two-factor authentication method that uses honeytokens. Honeytokens and Google Authenticator are combined to create a stronger authentication process. The proposed approach aims to provide additional layers of security and protection to computer systems, increasing their overall security beyond what is currently provided by single-password or standard two-factor authentication methods. The key difference is that the proposed system resembles two-factor authentication but, in reality, works like a multi-factor authentication system. Multi-factor authentication (MFA) is a security technique that verifies a user’s identity by requiring multiple credentials from distinct categories: knowledge factors (something the user knows, such as a password or PIN), possession factors (something the user has, such as a mobile phone or security token), and inherence factors (something the user is, such as a biometric characteristic like a fingerprint). This multi-tiered approach significantly enhances protection against potential attacks. We examined and evaluated the system’s robustness against various types of attacks. From the user’s side, the system is as friendly as a two-factor authentication method with an authenticator, while being more secure.
A Layered and Parallelized Method of Eventual Model Checking
Yati Phyo, Moe Nandi Aung, Canh Minh Do, K. Ogata (doi:10.3390/info14070384, published 6 July 2023)
Termination or halting is an important requirement that many systems should satisfy, and it can be expressed in linear temporal logic as an eventual property. We devised a divide-and-conquer approach to eventual model checking in order to mitigate the state space explosion in model checking. The idea of the technique is to split an original model checking problem for eventual properties into multiple smaller model checking problems and handle each of them. Due to the nature of the divide-and-conquer approach, each smaller model checking problem can essentially be tackled independently. Hence, this paper proposes a parallel technique/tool based on a master–worker pattern for the divide-and-conquer approach to model checking eventual properties. We carry out experiments to show the effectiveness of our parallel technique/tool, which improves running performance to a certain extent when model checking eventual properties.