
Icon: Latest Publications

Implementation of the CPFSK Signal Non-coherent Multi-symbol Detection Algorithm with Reduced Complexity
Q3 Arts and Humanities Pub Date : 2023-03-01 DOI: 10.1109/ICNLP58431.2023.00030
Xihai Xie, Mingxin Zhang
The traditional non-coherent Multi-Symbol Detection (MSD) method for Continuous Phase Frequency Shift Keying (CPFSK) signals requires a large number of correlation operations and has high computational complexity. In this paper, we propose a non-coherent multi-symbol detection algorithm with reduced complexity. The improved algorithm performs the correlation operation only against local reference signals whose first symbol matches the symbol already detected, which cuts the number of correlations by nearly 50% compared with the traditional algorithm and reduces the overall complexity. The reduced-complexity algorithm is modeled and simulated graphically on the DSP Builder platform, and the model is simulated with ModelSim for engineering validation. The simulation results show that the model can implement the improved algorithm for CPFSK signal detection.
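As a rough illustration of the pruning idea in this abstract, the sketch below correlates a received block only against candidate reference waveforms whose first symbol matches the previously detected symbol, which halves the binary reference set. The waveform model, modulation index, block length, and noise level are illustrative assumptions, not the paper's DSP Builder/ModelSim implementation.

```python
import itertools
import numpy as np

# Illustrative CPFSK parameters (assumed, not from the paper)
H = 0.5               # modulation index
SPS = 8               # samples per symbol
ALPHABET = (-1, +1)   # binary CPFSK symbols
N = 3                 # observation length in symbols (multi-symbol detection)

def cpfsk_waveform(symbols, phase0=0.0):
    """Generate a unit-amplitude CPFSK waveform for a symbol sequence."""
    t = np.arange(SPS) / SPS
    phase, chunks = phase0, []
    for a in symbols:
        chunks.append(np.exp(1j * (phase + np.pi * H * a * t)))
        phase += np.pi * H * a          # continuous phase accumulation
    return np.concatenate(chunks)

def msd_detect(rx_block, prev_symbol):
    """Non-coherent MSD over one block, correlating only against references
    whose first symbol equals the previously detected symbol."""
    best_seq, best_metric = None, -np.inf
    for seq in itertools.product(ALPHABET, repeat=N):
        if prev_symbol is not None and seq[0] != prev_symbol:
            continue                     # pruning step: skips ~50% of references
        ref = cpfsk_waveform(seq)
        metric = np.abs(np.vdot(ref, rx_block))   # non-coherent correlation
        if metric > best_metric:
            best_seq, best_metric = seq, metric
    return best_seq

# Usage: detect one noisy block given that the previous decision was +1
true_seq = (+1, -1, +1)
rx = cpfsk_waveform(true_seq) + 0.1 * (np.random.randn(N * SPS) + 1j * np.random.randn(N * SPS))
print(msd_detect(rx, prev_symbol=+1))
```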
Citations: 0
Multilingual BERT Cross-Lingual Transferability with Pre-trained Representations on Tangut: A Survey
Q3 Arts and Humanities Pub Date : 2023-03-01 DOI: 10.1109/ICNLP58431.2023.00048
Xiaoming Lu, Wenjian Liu, Shengyi Jiang, Changqing Liu
Natural Language Processing (NLP) systems have three main components: tokenization, embedding, and model architecture (top deep learning models such as BERT, GPT-2, or GPT-3). In this paper, the authors explore and summarize possible ways of fine-tuning the Multilingual BERT (mBERT) model and feeding it effective encodings of Tangut characters. Tangut is an extinct, low-resource language. We aim to introduce a tailored embedding layer for Tangut as part of the fine-tuning procedure without altering mBERT's internal structure. The initial work is listed on. By reviewing existing State of the Art (SOTA) approaches, we hope to further analyze the performance boost of mBERT when applied to low-resource languages.
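A minimal sketch of the kind of embedding extension described above, assuming the Hugging Face transformers library: new Tangut tokens are added to the mBERT vocabulary and only the embedding matrix is enlarged, leaving the rest of the model untouched. The placeholder Tangut code points and the freezing strategy are assumptions for illustration.

```python
# Sketch: extend mBERT with a tailored embedding for Tangut tokens (assumed
# Hugging Face transformers workflow; the Tangut token list is a placeholder).
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertModel.from_pretrained("bert-base-multilingual-cased")

# Hypothetical Tangut character tokens (the Unicode Tangut block starts at U+17000).
tangut_tokens = [chr(0x17000 + i) for i in range(16)]
num_added = tokenizer.add_tokens(tangut_tokens)

# Grow the input embedding matrix so the new tokens get trainable vectors,
# without modifying any other part of mBERT's architecture.
model.resize_token_embeddings(len(tokenizer))

# One possible fine-tuning choice (assumed): train only the word embeddings at first.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("embeddings.word_embeddings")

ids = tokenizer(tangut_tokens[0], return_tensors="pt")
hidden = model(**ids).last_hidden_state
print(num_added, hidden.shape)
```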
Citations: 0
An Improved LMS Adaptive Filtering Speech Enhancement Algorithm
Q3 Arts and Humanities Pub Date : 2023-03-01 DOI: 10.1109/ICNLP58431.2023.00033
Xi Hai Xie, Wen Chuan Wang
In order to improve the accuracy of speech recognition, the input speech signal is usually denoised first, typically with the Least Mean Square (LMS) algorithm. To address the drawback that a fixed-step LMS adaptive filter cannot balance convergence speed against steady-state error, this paper proposes a variable-step LMS algorithm based on an improved inverse hyperbolic sine function. The improved algorithm is applied to speech enhancement and its performance is compared with several other improved algorithms. The simulation results show that the improved algorithm better balances the trade-off between convergence speed and steady-state error and has a clear denoising effect on noisy speech, which effectively improves the clarity and intelligibility of speech and provides a prerequisite for speech recognition.
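The abstract does not give the exact step-size function, so the sketch below only illustrates the general variable step-size LMS idea with an inverse-hyperbolic-sine shaping of the error; the mu(e) = beta * asinh(alpha * e^2) rule and all parameter values are assumptions, not the paper's formula.

```python
import numpy as np

def vss_lms_arcsinh(x, d, order=16, alpha=5.0, beta=0.01):
    """Variable step-size LMS sketch: the step size grows with the instantaneous
    error through an inverse hyperbolic sine, so adaptation is fast far from
    convergence and the step shrinks near convergence for low steady-state error.
    The mu(e) = beta * asinh(alpha * e^2) rule is an illustrative assumption."""
    n = len(x)
    w = np.zeros(order)
    e = np.zeros(n)
    for i in range(order, n):
        u = x[i - order:i][::-1]           # most recent reference samples
        y = w @ u                          # filter output (noise estimate)
        e[i] = d[i] - y                    # error = enhanced speech sample
        mu = beta * np.arcsinh(alpha * e[i] ** 2)
        w = w + mu * e[i] * u              # LMS weight update
    return e, w

# Toy usage: adaptive noise cancellation with a correlated noise reference
rng = np.random.default_rng(0)
noise = rng.standard_normal(4000)
clean = np.sin(2 * np.pi * 0.01 * np.arange(4000))
d = clean + 0.5 * noise                    # noisy "speech" (desired signal)
enhanced, _ = vss_lms_arcsinh(noise, d)    # filter input is the noise reference
```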
Citations: 0
Graph-to-Text Generation Combining Directed and Undirected Structural Information in Knowledge Graphs*
Q3 Arts and Humanities Pub Date : 2023-03-01 DOI: 10.1109/ICNLP58431.2023.00064
Hongda Gong, Shimin Shan, Hongkui Wei
The graph-to-text generation task transforms knowledge graphs into natural language. In current research, pretrained language models (PLMs) have shown better performance than structured graph encoders on this generation task. PLMs currently serialize knowledge graphs mostly by transforming them into undirected graph structures. An undirected structure provides a more comprehensive representation of the information in the knowledge graph, but it struggles to capture the dependencies between entities, so the represented information may not be accurate. We therefore use four types of positional embedding to capture both the directed and the undirected structure of the knowledge graph, representing the information in the graph and the dependencies between entities more fully. We then add a semantic aggregation module to the Transformer layer of the PLM, which produces a more comprehensive representation of the knowledge graph and captures the dependencies between entities. Our approach thus combines the advantages of directed and undirected structural information. In addition, the new approach is better at capturing generic knowledge and achieves good results with small samples of data.
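As a hedged illustration of extracting directed versus undirected structure from a knowledge graph, the sketch below builds pairwise shortest-path distance matrices with networkx over a toy triple set; such matrices are one possible source of graph positional embeddings, but the paper's four specific embedding types are not reproduced here, and the triples and cap value are invented for the example.

```python
# Sketch: directed and undirected pairwise distances from a small knowledge graph.
import networkx as nx
import numpy as np

triples = [("Alan_Turing", "born_in", "London"),
           ("London", "capital_of", "United_Kingdom"),
           ("Alan_Turing", "field", "Computer_Science")]

G = nx.DiGraph()
for head, _rel, tail in triples:
    G.add_edge(head, tail)

nodes = list(G.nodes)
CAP = 8  # distance assigned to unreachable pairs (assumed)

def distance_matrix(graph):
    dist = np.full((len(nodes), len(nodes)), CAP, dtype=int)
    lengths = dict(nx.all_pairs_shortest_path_length(graph))
    for i, u in enumerate(nodes):
        for j, v in enumerate(nodes):
            if v in lengths.get(u, {}):
                dist[i, j] = min(lengths[u][v], CAP)
    return dist

directed_dist = distance_matrix(G)                     # respects edge direction
undirected_dist = distance_matrix(G.to_undirected())   # ignores direction

# Each distance value could index a learned relative-position embedding table,
# giving a model both direction-aware and direction-agnostic structure.
print(directed_dist)
print(undirected_dist)
```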
Citations: 0
Construction of Part of Speech Tagger for Malay Language: A Review
Q3 Arts and Humanities Pub Date : 2023-03-01 DOI: 10.1109/ICNLP58431.2023.00053
Nurulhuda Mohamad Ali, Goh Hui Ngo, Amy Lim Hui Lan
Part-of-Speech (POS) tagging is one of the fundamental tasks in Natural Language Processing (NLP) for analyzing human languages. It is the process of identifying how words are used in a sentence by assigning the proper POS tag to each word. Thus far, most well-researched POS tagging targets European languages, which are considered rich-resource languages because of their abundant linguistic resources, such as extensive research studies and large standard corpora. POS tagging is arduous for low-resource languages, however, because of their limited linguistic resources. The Malay language is considered a low-resource language, and most POS tagging studies for Malay use rule-based and stochastic methods; exploration of Deep Learning (DL) for Malay is limited. Thus, studies on POS tagging methods that apply DL to other low-resource languages within South East Asia are included in this study. The aim of this study is to identify the state of the art, challenges, and future work for the Malay POS tagger, and it reviews the different methods, datasets, and performance measures used in POS tagging studies.
Citations: 0
Entity Relationship Extraction Method Based on Multi-head Attention and Graph Convolutional Network
Q3 Arts and Humanities Pub Date : 2023-03-01 DOI: 10.1109/ICNLP58431.2023.00060
Sheping Zhai, Hang Li, Fangyi Li, Xinnian Kang
Extracting entities and relations from text is crucial in natural language processing. Current relation extraction methods rely on training sets labeled with distant supervision techniques. However, these methods are limited because they do not consider the connection between entity extraction and relation extraction and cannot extract overlapping entities and relations, so accurate joint entity-relation extraction remains challenging. This paper introduces an entity relation extraction model based on multi-head attention and graph convolutional networks. We use the multi-head attention mechanism to extract entity features, building on the text features extracted by the graph convolutional network. We evaluated the model on the New York Times (NYT) dataset. The experiments show that the model effectively captures the semantic correlation between entity and relation extraction and minimizes the impact of unrelated entity pairings, improving recognition accuracy even in scenarios with overlapping entities.
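The sketch below shows one plausible wiring of the two components named in the abstract, a graph convolution over token features followed by multi-head attention, using PyTorch; the dimensions, the mean-aggregation GCN, and the way the modules are combined are assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x, adj):
        # x: (batch, seq, dim), adj: (batch, seq, seq) with self-loops included
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        h = torch.bmm(adj / deg, x)          # mean aggregation over neighbours
        return torch.relu(self.linear(h))

class GCNAttnExtractor(nn.Module):
    def __init__(self, dim=128, heads=8):
        super().__init__()
        self.gcn = GCNLayer(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x, adj):
        h = self.gcn(x, adj)                 # graph-convolved text features
        out, _ = self.attn(h, h, h)          # multi-head self-attention on top
        return out                           # per-token entity features

# Usage with random tensors standing in for encoded tokens and a dependency graph
batch, seq, dim = 2, 10, 128
x = torch.randn(batch, seq, dim)
adj = torch.eye(seq).expand(batch, seq, seq).clone()
model = GCNAttnExtractor(dim)
print(model(x, adj).shape)   # torch.Size([2, 10, 128])
```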
Citations: 0
Implementation of License Plate Detection Based on Improved YOLOv5s
Q3 Arts and Humanities Pub Date : 2023-03-01 DOI: 10.1109/ICNLP58431.2023.00026
Chen Yang, Guang-Yuan Zhao
In order to solve the problem of low accuracy in license plate detection, an improved license plate detection algorithm is proposed. The super-resolution reconstruction network SRGAN is used to enhance the dataset images and make the license plate region clearer, and the fourth C3 module of the YOLOv5s backbone network is replaced with a CBAM attention module to strengthen the backbone's ability to extract feature information, thereby improving detection accuracy. The experimental results show that a YOLOv5s network that uses SRGAN for image enhancement and embeds the CBAM attention mechanism improves license plate detection accuracy.
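For reference, a standard CBAM block (channel attention followed by spatial attention) of the kind swapped into the YOLOv5s backbone can be sketched as below; the reduction ratio and 7x7 spatial kernel follow common CBAM defaults and are assumptions here, not the paper's stated settings.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Channel attention: shared MLP over globally avg- and max-pooled features
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: conv over channel-wise mean and max maps
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)                       # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))             # spatial attention

# Usage: apply CBAM to a backbone feature map
feat = torch.randn(1, 256, 20, 20)
print(CBAM(256)(feat).shape)   # torch.Size([1, 256, 20, 20])
```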
Citations: 0
Dynamic Service Migration Method Based on User Mobility
Q3 Arts and Humanities Pub Date : 2023-03-01 DOI: 10.1109/ICNLP58431.2023.00092
Haibo Ge, Haodong Feng, Jiajun Geng, Wenhao He, Yu An, Xing Song
With the rapid development of 5th Generation Mobile Communication Technology (5G), mobile edge computing (MEC) has played an important role in improving user experience and reducing energy consumption and latency. Mobile edge computing offloads part of the computing and storage of the central cloud to the network edge so that data generated by terminal devices can be processed quickly, but how to ensure that users obtain good performance as they move between locations is a key issue. To solve this problem and reduce end-to-end delay and energy consumption, a service migration algorithm for mobile edge computing based on a hybrid migration strategy driven by node residual energy (HOS-RE) is proposed. The experimental results show that, compared with other methods, this method reduces service migration time and ensures the continuity of services.
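Purely as an illustration of an energy-aware migration decision, the toy rule below scores candidate edge nodes by residual energy and a latency proxy and migrates only when the gain exceeds a margin; the weights, latency model, and threshold are invented for the sketch and are not the HOS-RE algorithm.

```python
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    residual_energy: float   # 0..1, fraction of the node's energy budget left
    distance_to_user: float  # hops or km, used as a crude latency proxy

def score(node: EdgeNode, w_energy: float = 0.6, w_latency: float = 0.4) -> float:
    """Higher is better: prefer nodes with more residual energy and lower latency."""
    latency_term = 1.0 / (1.0 + node.distance_to_user)
    return w_energy * node.residual_energy + w_latency * latency_term

def choose_target(current: EdgeNode, candidates: list[EdgeNode],
                  migration_gain: float = 0.1) -> EdgeNode:
    """Migrate only if the best candidate beats the current node by a margin,
    so the service is not bounced between nodes for marginal gains."""
    best = max(candidates, key=score)
    return best if score(best) > score(current) + migration_gain else current

# Usage: the user has moved, so nearby nodes are re-scored
nodes = [EdgeNode("edge-A", 0.9, 3.0), EdgeNode("edge-B", 0.4, 0.5)]
current = EdgeNode("edge-C", 0.7, 4.0)
print(choose_target(current, nodes).name)
```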
Citations: 0
Deep Composite Kernels ELM Based on Spatial Feature Extraction for Hyperspectral Vegetation Image Classification
Q3 Arts and Humanities Pub Date : 2023-03-01 DOI: 10.1109/ICNLP58431.2023.00023
Yu Lei, Guangyuan Zhao, Lingjie Zhang
Vegetation classification plays a pivotal role in forest management and ecological research and is a specific application of hyperspectral image classification. However, existing classification models do not make sufficient use of the spatial features of vegetation and cannot extract deep feature information. To address these issues, we propose a deep composite kernel extreme learning machine based on spatial feature extraction (DCKELM-SPATIAL) to classify vegetation. In particular, we use a Gabor filter and a superpixel density peak clustering method to obtain a new set of spatial composite kernels. Experiments on two real hyperspectral vegetation datasets show that this method outperforms several classical and advanced methods in classification accuracy and achieves satisfactory results.
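A minimal sketch of a composite-kernel kernel-ELM classifier along the lines described above: a weighted sum of a spectral RBF kernel and a spatial-feature RBF kernel is plugged into the standard kernel-ELM closed form beta = (I/C + K)^{-1} T. The mixing weight, kernel widths, regularization constant, and random stand-in features are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def composite_kernel(Xspec_a, Xspat_a, Xspec_b, Xspat_b, mu=0.6,
                     gamma_spec=0.5, gamma_spat=0.5):
    # Weighted combination of a spectral kernel and a spatial-feature kernel
    return mu * rbf_kernel(Xspec_a, Xspec_b, gamma_spec) + \
           (1 - mu) * rbf_kernel(Xspat_a, Xspat_b, gamma_spat)

def kelm_train(Xspec, Xspat, y_onehot, C=100.0, **kw):
    K = composite_kernel(Xspec, Xspat, Xspec, Xspat, **kw)
    n = K.shape[0]
    return np.linalg.solve(np.eye(n) / C + K, y_onehot)   # beta = (I/C + K)^-1 T

def kelm_predict(Xspec_test, Xspat_test, Xspec, Xspat, beta, **kw):
    K = composite_kernel(Xspec_test, Xspat_test, Xspec, Xspat, **kw)
    return np.argmax(K @ beta, axis=1)

# Toy usage with random stand-ins for spectral vectors and spatial (e.g. Gabor) features
rng = np.random.default_rng(0)
Xspec, Xspat = rng.standard_normal((50, 30)), rng.standard_normal((50, 10))
y = rng.integers(0, 3, 50)
beta = kelm_train(Xspec, Xspat, np.eye(3)[y])
print(kelm_predict(Xspec[:5], Xspat[:5], Xspec, Xspat, beta))
```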
Citations: 0
Research based on improved SSD target detection algorithm
Q3 Arts and Humanities Pub Date : 2023-03-01 DOI: 10.1109/icnlp58431.2023.00009
Qiang Li, Haibo Ge, Chaofeng Huang, Ting Zhou
Target detection performs poorly in complex environments, particularly at night and in the presence of false targets, where missed detections and false detections are common. With the development of deep learning, an improved SSD-based target detection algorithm is proposed: an attention mechanism and a feature fusion module are added on top of SSD and integrated into the original network. In addition, an FPN module is used to fuse deep and shallow network features and improve the representation of semantic information. Experiments were carried out on the VOC2007 dataset, a pseudo-target detection dataset, and a night target detection dataset. The results show that the detection accuracy of this method reaches 92.1%, verified on the camouflage dataset and the night target detection dataset. Compared with SSD and Mobile-V2-SSD, the detection accuracy of this method is improved by 16.3% and 4.8%, respectively, and it shows better robustness and real-time detection ability in complex environments.
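The feature-fusion step mentioned above can be illustrated with a simple FPN-style block that upsamples the deep, semantically strong map and adds it to a laterally projected shallow map; the channel counts and feature-map sizes below are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FPNFuse(nn.Module):
    def __init__(self, shallow_ch, deep_ch, out_ch=256):
        super().__init__()
        self.lateral = nn.Conv2d(shallow_ch, out_ch, kernel_size=1)  # shallow -> out_ch
        self.reduce = nn.Conv2d(deep_ch, out_ch, kernel_size=1)      # deep -> out_ch
        self.smooth = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, shallow, deep):
        up = F.interpolate(self.reduce(deep), size=shallow.shape[-2:],
                           mode="nearest")         # bring deep map to shallow resolution
        fused = self.lateral(shallow) + up          # element-wise fusion
        return self.smooth(fused)                   # 3x3 conv to clean up aliasing

# Usage: fuse a 38x38 shallow map with a 19x19 deep map (SSD-like scales)
shallow = torch.randn(1, 512, 38, 38)
deep = torch.randn(1, 1024, 19, 19)
print(FPNFuse(512, 1024)(shallow, deep).shape)   # torch.Size([1, 256, 38, 38])
```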
Citations: 0