Pub Date: 2024-02-09 | DOI: 10.26599/TST.2023.9010060
Pengming Wang;Zijiang Zhu;Qing Chen;Weihuang Dai
With the advent of the information age, searching through large volumes of material to find the information one needs has become increasingly cumbersome. Text reasoning is a basic and important component of multi-hop question answering tasks. This paper studies the completeness, uniformity, and speed of computational intelligence for inference over data. Multi-hop reasoning arose for this purpose, but it is still in its infancy: search breadth, process complexity, response speed, and comprehensiveness of information remain insufficient for multi-hop question answering. This paper compares traditional information retrieval with computational intelligence on text, using corpus relevancy and other computational measures. The study finds that, for multi-hop question-answering reasoning, traditional retrieval methods lag behind the computational-intelligence approach by about 35% on the reasoning-data measures. This indicates that computational intelligence is more complete, more unified, and faster than traditional retrieval. The paper also introduces the relevant elements of text reasoning, describes the process of the multi-hop question answering system, and closes with discussion and outlook.
Title: Text Reasoning Chain Extraction for Multi-Hop Question Answering. Tsinghua Science and Technology, vol. 29, no. 4, pp. 959-970. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10431749
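As a rough, hypothetical illustration of the corpus-relevancy comparison this abstract alludes to (not the authors' actual method), the sketch below chains two retrieval hops using stdlib-only TF-IDF cosine similarity; the function names and toy corpus are invented for illustration:

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Build sparse TF-IDF vectors (dicts) for a small corpus."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()
    for toks in tokenized:
        for term in set(toks):
            df[term] += 1
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}  # smoothed idf
    return [{t: tf[t] * idf[t] for t in tf}
            for tf in (Counter(toks) for toks in tokenized)]

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def two_hop_chain(question, docs):
    """Greedy two-hop retrieval: pick the doc most relevant to the
    question, then the doc most relevant to (question + first hop)."""
    vecs = tf_idf_vectors(docs + [question])
    hop1 = max(range(len(docs)), key=lambda i: cosine(vecs[-1], vecs[i]))
    expanded = question + " " + docs[hop1]
    vecs2 = tf_idf_vectors(docs + [expanded])
    hop2 = max((i for i in range(len(docs)) if i != hop1),
               key=lambda i: cosine(vecs2[-1], vecs2[i]))
    return [hop1, hop2]
```

Expanding the query with the first retrieved passage is the simplest way to let the second hop "see" bridging entities; the paper's system presumably uses far richer reasoning-chain extraction.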
The article designs a new type of bridge circuit with a controlled source: when the resistances on the arms of the circuit satisfy the bridge balance condition, and the bridge branch contains only one Current-Controlled Current Source (CCCS), Voltage-Controlled Current Source (VCCS), Current-Controlled Voltage Source (CCVS), or Voltage-Controlled Voltage Source (VCVS), the circuit is called a controlled bridge circuit and exhibits bridge balance. Because of the relationship between the controlled source and the bridge arms, the sensitivity of the components on the bridge is higher, both mathematically and logically. When applied to measurement, engineering, automatic control, and other fields, the controlled bridge circuit achieves higher control accuracy. Mathematical derivation and simulation results confirm the bridge balance conclusion and the special properties of this bridge when applied to the measurement field.
Title: Characteristics of Controlled Bridge Circuit and Its Application in Magnetic Field Induction Measurement. Authors: Yanchu Li; Qingqing Ding; Mingchen Yan; Jiyao Wang; Jun Xu; Xinzhou Dong. Tsinghua Science and Technology, vol. 29, no. 4, pp. 1105-1117, published 2024-02-09. DOI: 10.26599/TST.2023.9010084. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10431735
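The paper's controlled-source bridge is not reproduced here; the sketch below only illustrates the classic resistive Wheatstone balance condition that the abstract generalizes (products of opposite arms equal, so the bridge branch carries no current). The R1..R4 arm labeling is an assumption for the illustration:

```python
def is_balanced(r1, r2, r3, r4, tol=1e-9):
    """Classic Wheatstone balance: products of opposite arms are equal,
    so the detector (bridge) branch carries no current."""
    return abs(r1 * r4 - r2 * r3) <= tol * max(r1 * r4, r2 * r3)

def bridge_output(vs, r1, r2, r3, r4):
    """Open-circuit voltage across the bridge branch of a plain resistive
    bridge driven by source vs: the difference between the two voltage
    divider midpoints (R1-R2 on one side, R3-R4 on the other)."""
    return vs * (r2 / (r1 + r2) - r4 / (r3 + r4))
```

Setting the two divider ratios equal and cross-multiplying recovers the balance condition R1*R4 = R2*R3; a small resistance change on one arm then produces a nonzero output, which is what makes bridges useful for sensitive measurement.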
Cyberattacks against highly integrated Internet of Things (IoT) servers, applications, and telecom infrastructure are rapidly increasing, as issues produced by IoT networks can go unnoticed for extended periods. IoT interface attacks must be evaluated in real time for effective safety and security measures. This study implements a smart intrusion detection system (IDS) designed for IoT threats; interoperability with IoT connectivity standards is provided by the identity solution. An IDS is a common type of network security technology that has recently received increasing interest from the research community, and it has piqued the curiosity of both scientific and industrial communities for identifying intrusions. Several IDSs based on machine learning (ML) and deep learning (DL) have been proposed. This study introduces IDS-SIoDL, a novel IDS for IoT-based smart cities that integrates long short-term memory (LSTM) and feature engineering. The model is evaluated on a tensor processing unit (TPU) using the enhanced BoT-IoT, Edge-IIoT, and NSL-KDD datasets. Compared with current IDSs, the obtained results show strong accuracy, recall, and precision (approximately 0.9990), with training and classification times of approximately 600 ms and 6 ms, respectively.
Title: Enhanced IDS with Deep Learning for IoT-Based Smart Cities Security. Authors: Chaimae Hazman; Azidine Guezzaz; Said Benkirane; Mourade Azrour. Tsinghua Science and Technology, vol. 29, no. 4, pp. 929-947, published 2024-02-09. DOI: 10.26599/TST.2023.9010033. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10431757
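IDS-SIoDL itself is not specified in this abstract, so the following is only a minimal pure-Python LSTM cell showing the recurrence at the core of any LSTM-based detector; the scalar state and the weight layout are illustrative assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM step for a scalar input and state. W maps each gate name
    ("i" input, "f" forget, "o" output, "g" candidate) to (w_x, w_h, b)."""
    def gate(name, act):
        w_x, w_h, b = W[name]
        return act(w_x * x + w_h * h_prev + b)
    i = gate("i", sigmoid)   # how much new information to admit
    f = gate("f", sigmoid)   # how much of the old cell state to keep
    o = gate("o", sigmoid)   # how much of the cell state to expose
    g = gate("g", math.tanh) # candidate cell update
    c = f * c_prev + i * g   # new cell state
    h = o * math.tanh(c)     # new hidden state
    return h, c
```

In an IDS setting the hidden state produced after consuming a sequence of per-flow features would be fed to a small classifier head to label the flow benign or anomalous.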
Pub Date: 2024-02-09 | DOI: 10.26599/TST.2023.9010114
Shanwen Guan;Xinhua Lu;Ji Li;Rushi Lan;Xiaonan Luo
When estimating the direction of arrival (DOA) of wideband signals from multiple sources, the performance of sparse Bayesian methods is influenced by the frequency bands occupied by signals in different directions. This is particularly true when multiple signal frequency bands overlap. Message passing algorithms (MPA) with Dirichlet process (DP) prior can be employed in a sparse Bayesian learning (SBL) framework with high precision. However, existing methods suffer from either high complexity or low precision. To address this, we propose a low-complexity DOA estimation algorithm based on a factor graph. This approach introduces two strong constraints via a stretching transformation of the factor graph. The first constraint separates the observation from the DP prior, enabling the application of the unitary approximate message passing (UAMP) algorithm for simplified inference and mitigation of divergence issues. The second constraint compensates for the deviation in estimation angle caused by the grid mismatch problem. Compared to state-of-the-art algorithms, our proposed method offers higher estimation accuracy and lower complexity.
Title: Combined UAMP and MF Message Passing Algorithm for Multi-Target Wideband DOA Estimation with Dirichlet Process Prior. Tsinghua Science and Technology, vol. 29, no. 4, pp. 1069-1081. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10431731
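The UAMP/DP machinery is beyond a short sketch, but the grid-mismatch problem that the second constraint compensates for can be illustrated: estimate a direction on a coarse angular grid, then refine the peak off-grid by parabolic interpolation. Everything below (uniform linear array, half-wavelength spacing, single noiseless source) is an illustrative assumption, not the paper's algorithm:

```python
import cmath
import math

def steering(theta_deg, m, d=0.5):
    """Steering vector of an m-element uniform linear array with element
    spacing d (in wavelengths) toward angle theta (degrees)."""
    th = math.radians(theta_deg)
    return [cmath.exp(-2j * math.pi * d * k * math.sin(th)) for k in range(m)]

def grid_doa(snapshot, grid):
    """Correlate a snapshot with steering vectors on a coarse grid, then
    refine the peak by parabolic interpolation (off-grid compensation)."""
    m = len(snapshot)
    power = []
    for theta in grid:
        a = steering(theta, m)
        corr = sum(s * av.conjugate() for s, av in zip(snapshot, a))
        power.append(abs(corr))
    k = max(range(len(grid)), key=lambda i: power[i])
    if 0 < k < len(grid) - 1:
        y0, y1, y2 = power[k - 1], power[k], power[k + 1]
        denom = y0 - 2 * y1 + y2
        if denom != 0:
            offset = 0.5 * (y0 - y2) / denom  # parabola vertex, in grid steps
            return grid[k] + offset * (grid[1] - grid[0])
    return grid[k]
```

Without the interpolation step the estimate can only ever land on a grid point, so the error is bounded below by half the grid spacing; the refinement removes most of that bias.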
Pub Date: 2024-02-09 | DOI: 10.26599/TST.2023.9010086
Xuan Yang;James A. Esquivel
Edge computing, which migrates compute-intensive tasks to run on the storage resources of edge devices, efficiently reduces data transmission loss and protects data privacy. However, owing to limited computing resources and storage capacity, edge devices cannot support real-time streaming data query and processing. To address this challenge, we first propose a Long Short-Term Memory (LSTM) network-based adaptive approach for the intelligent end-edge-cloud system. Specifically, we maximize users' Quality of Experience (QoE) by automatically adapting their resource requirements to the storage capacity of edge devices through an event mechanism. Second, to reduce uncertainty and incomplete adaptation of the edge device to the user's requirements, we use the LSTM network to analyze the storage capacity of the edge device in real time. Finally, the storage features of the edge devices are aggregated to the cloud to re-evaluate the comprehensive capability of the edge devices and to ensure fast response of the user devices during the dynamic adaptation matching process. A series of experimental results show that the proposed approach outperforms traditional centralized and matrix decomposition based approaches.
Title: LSTM Network-Based Adaptation Approach for Dynamic Integration in Intelligent End-Edge-Cloud Systems. Tsinghua Science and Technology, vol. 29, no. 4, pp. 1219-1231. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10431758
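A hypothetical sketch of just the adaptation-matching step the abstract describes (the LSTM capacity forecasting is omitted): each request event is served from edge storage while capacity remains and otherwise falls back to the cloud. All names and units are invented:

```python
def adapt_requirements(capacity_mb, requests):
    """Event-style matching sketch: serve each (name, size_mb) request from
    edge storage if it fits the remaining capacity, otherwise route it to
    the cloud. Returns the placement list and leftover edge capacity."""
    remaining = capacity_mb
    placement = []
    for name, size in requests:
        if size <= remaining:
            placement.append((name, "edge"))
            remaining -= size
        else:
            placement.append((name, "cloud"))
    return placement, remaining
```

In the paper's system the `capacity_mb` input would presumably be the LSTM's real-time estimate of edge storage capacity rather than a fixed number, so the matching adapts as the forecast changes.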
Pub Date: 2024-02-09 | DOI: 10.26599/TST.2024.9010014
With the exponential growth in data availability and advancements in computing power, the importance of neural networks lies in their ability to process large-scale data, automate tasks, support decision-making, and more. The transformative power of neural networks has the potential to reshape industries, improve lives, and contribute to the advancement of society as a whole. Neural networks depicted in ordinary differential equations (ODEs) ingeniously integrate neural networks and differential equations, two prominent modeling approaches widely applied in fields such as chemistry, physics, engineering, and economics. As equations that describe the relationship between a class of functions and their derivatives, ODEs possess rich mathematical analysis methods and are thus integral tools in classical mathematical theory. Neural networks depicted in ODEs leverage the differential-equation description of physical processes and combine it with the potent fitting capabilities of neural networks. In contrast to traditional neural networks, which overlook physical information and rely solely on large numbers of neurons for fitting, neural networks depicted in ODEs can achieve more accurate estimates with fewer neurons while maintaining robustness, generalization, and interpretability in the learned systems. To fulfill the powerful potential of robots, many algorithms based on neural networks depicted in ODEs have been investigated to simulate human-like learning processes, realize decision-making tasks, and address the issues of uncertain models and control strategies.
Title: Call for Papers: Special Issue on Neural Networks Depicted in ODEs with Applications. Tsinghua Science and Technology, vol. 29, no. 4, p. 1248. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10431755
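A minimal sketch of the core idea in the simplest possible setting (scalar state, one hidden unit, forward Euler integration, all parameters invented): the neural network defines the vector field dy/dt = f(y), and integrating that field produces the model's output:

```python
import math

def mlp(t, y, params):
    """Tiny one-hidden-unit network defining the dynamics dy/dt = f(y)."""
    w1, b1, w2, b2 = params
    return w2 * math.tanh(w1 * y + b1) + b2

def odeint_euler(f, y0, t0, t1, steps, params):
    """Forward Euler integration of the network-defined vector field."""
    y, t = y0, t0
    h = (t1 - t0) / steps
    for _ in range(steps):
        y = y + h * f(t, y, params)
        t += h
    return y
```

Training a neural ODE means adjusting `params` so the integrated trajectory matches observed data; practical implementations use adaptive solvers and the adjoint method rather than fixed-step Euler, but the structure is the same.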
Pub Date: 2024-02-09 | DOI: 10.26599/TST.2023.9010080
Ahmad Alzu'bi;Ala'a Alomar;Shahed Alkhaza'leh;Abdelrahman Abuarqoub;Mohammad Hammoudeh
The healthcare industry is rapidly adapting to new computing environments and technologies. With academics increasingly committed to developing and enhancing healthcare solutions that combine the Internet of Things (IoT) and edge computing, there is a greater need than ever to adequately monitor the data being acquired, shared, processed, and stored. The growth of cloud, IoT, and edge computing models presents severe data privacy concerns, especially in the healthcare sector, yet rigorous research into appropriate data privacy solutions for healthcare is still lacking. This paper discusses the current state of privacy-preservation solutions in IoT and edge healthcare applications. It identifies the common strategies that intelligent edge technologies in healthcare systems use to preserve privacy. Furthermore, the study addresses the technical complexity, efficacy, and sustainability limits of these methods. The study also highlights the privacy issues and current research directions that have driven IoT and edge healthcare solutions, to encourage more insightful future applications.
Title: A Review of Privacy and Security of Edge Computing in Smart Healthcare Systems: Issues, Challenges, and Research Directions. Tsinghua Science and Technology, vol. 29, no. 4, pp. 1152-1180. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10431732
Pub Date: 2024-02-09 | DOI: 10.26599/TST.2023.9010032
A. E. M. Eljialy;Mohammed Yousuf Uddin;Sultan Ahmad
Intrusion detection systems (IDSs) are deployed to detect anomalies in real time. They classify a network's incoming traffic as benign or anomalous (attack). An efficient and robust IDS in software-defined networks is an indispensable component of network security. The main challenges for such an IDS are achieving zero or extremely low false positive rates together with high detection rates. Internet of Things (IoT) networks run on devices with minimal resources, which makes deploying traditional IDSs in IoT networks unfeasible. Machine learning (ML) techniques are extensively applied to build robust IDSs, and many researchers have utilized different ML methods and techniques to address the above challenges. The development of an efficient IDS starts with a good feature selection process to avoid overfitting the ML model. This work proposes a multiple feature selection process followed by classification. In this study, a software-defined networking (SDN) dataset is used to train and test the proposed model. The model applies multiple feature selection techniques to select high-scoring features from a set of features; highly relevant features for anomaly detection are selected on the basis of their scores to generate the candidate dataset. Multiple classification algorithms are then applied to the candidate dataset to build models. The proposed model exhibits considerable improvement in the detection of attacks, with high accuracy and low false positive rates, even with only a few features selected.
Title: Novel Framework for an Intrusion Detection System Using Multiple Feature Selection Methods Based on Deep Learning. Tsinghua Science and Technology, vol. 29, no. 4, pp. 948-958. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10431760
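A hypothetical sketch of a multiple-feature-selection stage of the kind the abstract describes (the paper's actual scorers and dataset are not reproduced): rank features by two simple scores, variance and absolute Pearson correlation with the label, and keep the union of the top-k under each scorer:

```python
import math

def variance_scores(X):
    """Per-feature variance over the rows of X."""
    n = len(X)
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        mean = sum(col) / n
        scores.append(sum((v - mean) ** 2 for v in col) / n)
    return scores

def correlation_scores(X, y):
    """Absolute Pearson correlation of each feature with the label."""
    n = len(X)
    my = sum(y) / n
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        mx = sum(col) / n
        sx = math.sqrt(sum((v - mx) ** 2 for v in col))
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        scores.append(abs(cov / (sx * sy)) if sx and sy else 0.0)
    return scores

def select_features(X, y, k):
    """Keep features ranked in the top-k by EITHER scorer (union),
    mimicking a multi-method selection stage before classification."""
    v = variance_scores(X)
    c = correlation_scores(X, y)
    top = lambda s: set(sorted(range(len(s)), key=lambda j: -s[j])[:k])
    return sorted(top(v) | top(c))
```

Taking the intersection instead of the union would be stricter (only features every scorer agrees on); which combination works best is an empirical question the paper addresses with its own scorers.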
Pub Date: 2024-02-09 | DOI: 10.26599/TST.2023.9010073
Zexi Chen;Dongqiang Jia;Yushu Sun;Lin Yang;Wenjie Jin;Ruoxi Liu
To support the perception of and defense against operational risk in medium- and low-voltage distribution systems, it is crucial to mine the time series the system generates for anomalous patterns and to perform accurate, timely anomaly detection, enabling early discovery of anomalous conditions and early alerting; edge computing is widely used to process such Internet of Things (IoT) data. The key challenge of univariate time series anomaly detection is modeling complex nonlinear time dependence. However, most previous works model only short-term time dependence, without considering the periodic long-term time dependence. Therefore, we propose a new Hierarchical Attention Network (HAN), which introduces seven day-level attention networks to capture fine-grained short-term time dependence and uses a week-level attention network to model the periodic long-term time dependence. We then combine the day-level features learned by the day-level attention networks with the week-level feature learned by the week-level attention network to obtain a high-level time feature, from which we compute the anomaly probability and detect anomalies. Extensive experiments on a public anomaly detection dataset, together with deployment in a real-world medium- and low-voltage distribution system, show the superiority of our proposed framework over state-of-the-art methods.
Title: Univariate Time Series Anomaly Detection Based on Hierarchical Attention Network. Tsinghua Science and Technology, vol. 29, no. 4, pp. 1181-1193. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10431752
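A minimal sketch of two-level attention pooling in the spirit of the hierarchical design described above, assuming scalar features and caller-supplied scoring functions (the paper's learned attention parameters are not reproduced): attend over the points within each day, then over the seven day-level features:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def attend(values, scores):
    """Weighted sum of values under softmax(scores)."""
    w = softmax(scores)
    return sum(wi * vi for wi, vi in zip(w, values))

def week_feature(week, day_score, week_score):
    """Two-level pooling: attention over the points within each of the
    7 days, then attention over the 7 resulting day-level features."""
    day_feats = [attend(day, [day_score(v) for v in day]) for day in week]
    return attend(day_feats, [week_score(f) for f in day_feats])
```

In the paper the scoring functions would be learned networks and the resulting high-level feature would feed an anomaly-probability head; the hierarchy is what lets the model mix fine-grained daily patterns with the weekly periodicity.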