"Eager Term Rewriting For The Fracterm Calculus Of Common Meadows" by Jan A Bergstra and John V Tucker. The Computer Journal, 2023-11-01. DOI: 10.1093/comjnl/bxad106

Abstract: Eager equality is a novel semantics for equality in the presence of partial operations. We consider term rewriting under eager equality for arithmetic in which division is a partial operator. We use common meadows, which are essentially fields enlarged with an absorptive element $\bot$. The idea is that term rewriting is required to be semantics preserving for non-$\bot$ terms only. We show soundness and adequacy results for eager term rewriting with respect to the class of all common meadows. However, we show that an eager term rewrite system which is complete for common meadows of rational numbers is not easy to obtain, if it exists at all.
"Leveraging Meta-Learning To Improve Unsupervised Domain Adaptation" by Amirfarhad Farhadi and Arash Sharifi. The Computer Journal, 2023-11-01. DOI: 10.1093/comjnl/bxad104

Abstract: Unsupervised Domain Adaptation (UDA) techniques often fall short in real-world scenarios because they rely on reducing the distribution dissimilarity between source and target domains and assume that this alone yields effective adaptation. They overlook the intricate factors behind domain shift, including variations in data distribution, domain-specific features and nonlinear relationships, which hinders robust performance on challenging UDA tasks. The Neuro-Fuzzy Meta-Learning (NF-ML) approach overcomes these limitations with a flexible framework that adapts to intricate, nonlinear domain gaps without rigid assumptions. NF-ML selects a subset of UDA methods and optimizes their weights via a neuro-fuzzy system, using meta-learning to adapt models to new domains efficiently from previously acquired knowledge. By harnessing the strengths of multiple UDA methods, the approach mitigates domain adaptation challenges, improves on traditional UDA methods and strengthens overall model generalization, offering a robust and efficient solution for real-world domain shifts. Experiments on three standard image datasets confirm the approach's superiority over state-of-the-art UDA methods, validating the effectiveness of meta-learning. Notably, on Office+Caltech 10, ImageCLEF-DA and a combined digit dataset it achieves accuracy gains of 30.9%, 6.8% and 10.9%, respectively, over the second-best baseline UDA approach.
"An Intrusion Detection Method Based on Attention Mechanism to Improve CNN-BiLSTM Model" by Dingyu Shou, Chao Li, Zhen Wang, Song Cheng, Xiaobo Hu, Kai Zhang, Mi Wen and Yong Wang. The Computer Journal, 2023-11-01. DOI: 10.1093/comjnl/bxad105

Abstract: The security of computer information can be improved with a network intrusion detection system. As the network environment becomes more complex, more and more new attack methods emerge, rendering the original intrusion detection methods ineffective; increased network activity also causes intrusion detection systems to make errors more frequently. In this research we propose a new intrusion detection technique that combines a Convolutional Neural Network (CNN) with a Bi-directional Long Short-Term Memory network (BiLSTM) and adds an attention mechanism. Our model differs from existing methods in three ways. First, we resample the dataset with the NCR-SMOTE algorithm. Second, we select features with a recursive feature elimination method based on extremely randomized trees. Third, we improve prediction accuracy by adding an attention mechanism to the CNN-BiLSTM model. On the UNSW-NB15 dataset, which is composed of real traffic, the multi-class accuracy is 84.5%; on the CSE-CIC-IDS2018 dataset the multi-class accuracy reaches 98.3%.
"Enhancing Auditory Brainstem Response Classification Based On Vision Transformer" by Hunar Abubakir Ahmed, Jafar Majidpour, Mohammed Hussein Ahmed, Samer Kais Jameel and Amir Majidpour. The Computer Journal, 2023-11-01. DOI: 10.1093/comjnl/bxad107

Abstract: The auditory brainstem response (ABR) is a method for testing the health of the ear's peripheral auditory nerve and its connection to the brainstem. Manual quantification of ABR tests by an audiologist is not only costly but also time-consuming and susceptible to errors. Recent advances in machine learning have prompted a resurgence of research into ABR classification. This study presents an automated ABR recognition model. The first step in our design is to collect a dataset by extracting ABR test images from sample test reports. We then employ an elastic distortion approach to generate new images from the originals, expanding the dataset while preserving the fundamental structure and morphology of the original ABR content. Finally, a Vision Transformer is trained to obtain our model. In the testing phase, incorporating both the newly generated and the original images yields an accuracy of 97.83%. This result is noteworthy when benchmarked against the latest research in the field, underscoring the substantial performance enhancement achieved through the use of generated data.
"Keyframe-guided Video Swin Transformer with Multi-path Excitation for Violence Detection" by Chenghao Li, Xinyan Yang and Gang Liang. The Computer Journal, 2023-10-20. DOI: 10.1093/comjnl/bxad103

Abstract: Violence detection aims to identify violent behavior in video by extracting frames and applying classification models. However, the complexity of video data and the suddenness of violent events make it hard to pinpoint instances of violence accurately, so extracting the frames that indicate violence is challenging. Furthermore, designing and applying high-performance models for violence detection remains an open problem. Traditional models embed spatial features extracted from sampled frames directly into a temporal sequence, which ignores the spatio-temporal characteristics of video and limits the ability to express continuous change between adjacent frames. To address these challenges, this paper proposes a novel framework called ACTION-VST. First, a keyframe extraction algorithm selects the frames most likely to represent violent scenes. To transform visual sequences into spatio-temporal feature maps, a multi-path excitation module is proposed to activate spatio-temporal, channel and motion features. Next, a Video Swin Transformer-based network performs both global and local spatio-temporal modeling, enabling comprehensive feature extraction and representation of violence. The method was validated on two large-scale datasets, RLVS and RWF-2000, achieving accuracies of over 98% and 93%, respectively, surpassing the state of the art.
"Policy-Based Remote User Authentication From Multi-Biometrics" by Yangguang Tian, Yingjiu Li, Robert H Deng, Guomin Yang and Nan Li. The Computer Journal, 2023-10-19. DOI: 10.1093/comjnl/bxad102

Abstract: In this paper, we introduce the first generic framework for policy-based remote user authentication from multiple biometrics. The proposed framework allows an authorized user to authenticate herself remotely to an authentication server using her multiple biometrics, which enhances both the security and the usability of user authentication. The authentication server approves a user's authentication request if and only if the user's multiple biometrics satisfy an authentication policy. In particular, the authentication policy can be updated dynamically to satisfy different security and usability requirements in practice. We implement an instantiation of the proposed framework and report its performance under various authentication policies.
"Underwater Wireless Sensor Network-Based Delaunay Triangulation (UWSN-DT) Algorithm for Sonar Map Fusion" by Xin Yuan, Ning Li, Xiaobo Gong, Changli Yu, Xiaoteng Zhou and José-Fernán Martínez Ortega. The Computer Journal, 2023-10-11. DOI: 10.1093/comjnl/bxad094

Abstract: Robust and fast image recognition and matching is an important task in the underwater domain. The primary focus of this work is extracting subsea features with a sonar sensor for subsequent Autonomous Underwater Vehicle navigation, such as robotic localization and landmark mapping. Exploiting the high-resolution underwater features in Side Scan Sonar (SSS) images, an efficient feature detector and descriptor, Speeded Up Robust Features (SURF), is employed for the seabed sonar image fusion task. To address the nonlinear intensity differences in SSS images, the main novelty of this work is the proposed Underwater Wireless Sensor Network-based Delaunay Triangulation (UWSN-DT) algorithm, which improves sonar map fusion accuracy at low computational complexity by treating the wireless nodes as underwater feature points, since the nodes can provide information that is useful for map fusion, such as their locations. Simulated experiments show that the UWSN-DT approach works efficiently and robustly, especially in subsea environments with few distinguishable feature points.
"Similarity Regression Of Functions In Different Compiled Forms With Neural Attentions On Dual Control-Flow Graphs" by Yun Zhang, Yuling Liu, Ge Cheng and Jie Wang. The Computer Journal, 2023-10-11. DOI: 10.1093/comjnl/bxad095

Abstract: Detecting whether two functions in different compiled forms are similar has a wide range of applications in software security. We present a method that leverages both semantic and structural features of functions, learned by a neural-network model on the underlying control-flow graphs (CFGs). In particular, we devise a neural function-similarity regressor (NFSR) with attention on dual CFGs. We train and evaluate NFSR on a dataset of nearly 4 million functions from over 14,900 binary files. Experiments show that NFSR outperforms the state-of-the-art models SAFE, Gemini and GMN, especially on binary functions with large CFGs. An ablation study shows that attention on dual CFGs plays a significant role in detecting function similarities.
"Exact Short Products From Truncated Multipliers" by Daniel Lemire. The Computer Journal, 2023-10-11. DOI: 10.1093/comjnl/bxad077

Abstract: We sometimes need to compute the most significant digits of the product of a small integer with a multiplier that requires much storage, e.g. a large integer (such as $5^{100}$) or an irrational number ($\pi$). As long as the integers are sufficiently small, we only need to access the most significant digits of the multiplier. We provide an efficient algorithm to compute the range of integers for which this holds, given a truncated multiplier and a desired number of digits.