Analysis on Hacking the Secured Air-Gapped Computer and Possible Solution
Vrinda Sati, R. Muthalagu
Cybernetics and Information Technologies, June 2023. DOI: 10.2478/cait-2023-0017

Abstract. The world today runs on data: every task, from the smallest to the largest, requires it, and all of that data is stored in the technologies we use. To keep data safe, air gaps are introduced. An air gap is a network security measure in which a secure computer network is physically isolated from unsecured networks. Nevertheless, various methods of breaching the air gap have emerged. This paper analyzes the problem of attacking an air-gapped computer via screen-brightness modulation. The proposed solution is a software program, built on Windows Management Instrumentation (WMI), that alerts the user to a change in the screen brightness level. Applied to an air-gapped computer, the program displays an alert box immediately when the screen brightness changes. The solution is a simple and efficient way to counter the attack; the program can be deployed in different testing environments, and the WMI approach can be applied to various other cyberattacks.
Joint Reference and Relation Extraction from Legal Documents with Enhanced Decoder Input
Nguyen Thi Thanh Thuy, Nguyen Ngoc Diep, Ngo Xuan Bach, Tu Minh Phuong
Cybernetics and Information Technologies, June 2023. DOI: 10.2478/cait-2023-0014

Abstract. This paper addresses an important task in legal text processing, namely reference and relation extraction from legal documents, which comprises two subtasks: 1) reference extraction and 2) relation determination. Motivated by the fact that the two subtasks are related and share common information, we propose a joint learning model that solves both simultaneously. The model employs a Transformer-based encoder-decoder architecture with non-autoregressive decoding, which relaxes the sequentiality of traditional seq2seq models and extracts references and relations in a single inference step. We also propose a method to enrich the decoder input with learnable, meaningful information, which further improves accuracy. Experimental results on a dataset of 5031 legal documents in Vietnamese with 61,446 references show that the proposed model outperforms several strong baselines and achieves an F1 score of 99.4% on the joint reference and relation extraction task.
A New Hybrid Model to Predict Human Age Estimation from Face Images Based on Supervised Machine Learning Algorithms
Mohammed Jawad Al-dujaili, Hydr jabar sabat Ahily
Cybernetics and Information Technologies, June 2023. DOI: 10.2478/cait-2023-0011

Abstract. Age estimation from face images is a significant topic in machine vision, of great interest for age-based access control and targeted marketing. The proposed approach has two main stages. In the first stage, features are extracted from the face regions using Pseudo Zernike Moments (PZM), the Active Appearance Model (AAM), and Bio-Inspired Features (BIF). In the second stage, Support Vector Machine (SVM) and Support Vector Regression (SVR) algorithms are used to predict the age range of the face images. The method has been assessed on the well-known IMDB-WIKI and WIT-DB databases. Overall, the experimental results indicate that the proposed method can be considered the best-performing of the evaluated approaches for age estimation from face images.
Image Clustering and Feature Extraction by Utilizing an Improvised Unsupervised Learning Approach
R. Bhuvanya, M. Kavitha
Cybernetics and Information Technologies, June 2023. DOI: 10.2478/cait-2023-0010

Abstract. The need for information is gradually shifting from text to images due to the growth of technology and the increase in digital images, and finding images with similar colors is quite challenging. To obtain a similarity match, the colors of an image first have to be identified. This paper examines various clustering techniques for identifying the colors of a digital image. Although many clustering techniques exist, the focus here is on Fuzzy c-Means, Mean-Shift, and a hybrid technique that combines agglomerative hierarchical clustering with k-Means, known as hKmeans, to cluster the image intensities. Evaluation with Mean Squared Error, Root Mean Squared Error, Mean Absolute Error, Homogeneity, Completeness, V-Score, and Peak Signal-to-Noise Ratio shows the good performance of the proposed technique. A color histogram is then applied to identify the colors and compare the color distributions of the original and clustered images.
Tunnel Parsing with Ambiguous Grammars
Nikolay Handzhiyski, E. Somova
Cybernetics and Information Technologies, June 2023. DOI: 10.2478/cait-2023-0012

Abstract. The article proposes an extension of the tunnel parsing algorithm that enables it to parse grammars with countable repetitions and configurations of grammar elements that generate empty words, without refactoring the grammar. The equivalence of trees built with an ambiguous grammar is discussed. The class of ε-ambiguous grammars is defined as a subclass of the ambiguous grammars relative to these trees, and the ε-deterministic grammars are then defined as a subclass of the ε-ambiguous grammars. A technique for linear-time parsing based on non-left-recursive ε-deterministic grammars with the tunnel parsing algorithm is shown.
Novel Approaches for Searching and Recommending Learning Resources
Tran Thanh Dien, Nguyen Thanh-Hai, Nguyen Thai-Nghe
Cybernetics and Information Technologies, June 2023. DOI: 10.2478/cait-2023-0019

Abstract. This study proposes models for searching and recommending learning resources to meet learners' needs and help improve student performance. It suggests a general architecture for searching and recommending learning resources and specifically proposes: (1) a learning resource classification model based on deep learning techniques such as MLP; (2) an approach for searching learning resources based on document similarity; (3) learning performance prediction models using deep learning, namely a CNN model trained on all student data, an MLP model per ability group, and an LSTM model per student; and (4) a learning resource recommendation model using deep matrix factorization. Experimental results show that the proposed models are feasible for the classification, search, ranking prediction, and recommendation of learning resources in higher education institutions.
SCLang: Graphical Domain-Specific Modeling Language for Stream Cipher
Samar A. Qassir, M. Gaata, A. Sadiq
Cybernetics and Information Technologies, June 2023. DOI: 10.2478/cait-2023-0013

Abstract. A Stream Cipher (SC) is a type of symmetric-key encryption that scrambles each unit of plaintext to conceal it from attackers. Despite its advantages, it poses a substantial challenge: correctly hand-writing the script code for a cipher scheme is difficult for programmers. In this paper, we propose a graphical Domain-Specific Modeling Language (DSML) that makes it easier for non-technical users and domain specialists to implement a stream cipher. The proposed language, SCLang, offers great expressiveness and flexibility: six different methods of keystream generation are provided for obtaining a random sequence, and the fifteen tests of the NIST suite are provided for statistical randomness analysis. The concepts of the SC domain and their relationships are presented in a meta-model. SCLang is evaluated through qualitative analysis to demonstrate its effectiveness and efficiency.
Model for Reinvestment Policy in Risk-Free Assets with Various Maturities
T. Stoilov, K. Stoilova, D. Kanev
Cybernetics and Information Technologies, June 2023. DOI: 10.2478/cait-2023-0018

Abstract. Logistic tasks aim at the optimal distribution of material, energy, financial, and human resources. This research targets a narrower field: the optimal management and redistribution of financial resources. Specifically, a reinvestment policy model is derived by maximizing the profit of a business entity. Reinvestment is made in risk-free assets, but the assets have different maturity periods, which complicates the assessment of the optimal investment strategy because reinvestment can take place only at the end of a maturity period. The study develops a model of this dynamic control process, which leads to a discrete, integer-time optimization problem. Its solution yields a sequence of investments and the total optimal return. The solution of the problem is illustrated in an Excel environment. The added value of this study stems from the formalization and quantification of the reinvestment strategy model as an optimization problem.
Visual Quality Improvement of Watermarked Image Based on Singular Value Decomposition Using Walsh Hadamard Transform
Aris Marjuni, A. Z. Fanani, O. Nurhayati
Cybernetics and Information Technologies, March 2023. DOI: 10.2478/cait-2023-0006

Abstract. Embedding the watermark is still a challenge in image watermarking: the watermark should not reduce the visual quality of the image being watermarked, and the watermarked image should be hard to distinguish from the original. Embedding a watermark of small size might be a good solution; however, the watermark can easily be lost if the watermarked image is tampered with. This research proposes to increase the visual quality of the watermarked image by applying the Walsh-Hadamard transform to singular-value-decomposition-based image watermarking. Technically, the watermark image is converted into a low-bit-rate signal before being embedded in the host image. Experiments with various watermark sizes show that the proposed method achieves good imperceptibility, with 47.10 dB on average, and robustness close to the original watermark, with a normalized correlation close to 1 on average. The proposed method can also recognize the original watermark in a tampered watermarked image at different levels of robustness.
Type-2-Soft-Set Based Uncertainty Aware Task Offloading Framework for Fog Computing Using Apprenticeship Learning
K. Bhargavi, B. Sathish Babu, S. Shiva
Cybernetics and Information Technologies, March 2023. DOI: 10.2478/cait-2023-0002

Abstract. Fog computing is an emerging form of cloud computing that aims to satisfy the ever-increasing computation demands of mobile applications. Effective offloading of tasks increases the efficiency of the fog network, but it suffers from various sources of uncertainty, including task demands, fog node capabilities, information asymmetry, missing information, low trust, and transaction failures. Several machine learning techniques have been proposed for task offloading in fog environments, but they lack efficiency. This paper proposes a novel uncertainty-proof task offloading framework based on Type-2 Soft Sets (T2SS) and apprenticeship learning, which formulates optimal task offloading policies. The proposed T2SS-based apprenticeship learning outperforms Q-learning and State-Action-Reward-State-Action (SARSA) learning with respect to performance parameters such as total execution time, throughput, learning rate, and response time.