
Icon: Latest Publications

Multi feature fusion paper classification model based on attention mechanism
Q3 Arts and Humanities Pub Date : 2023-03-01 DOI: 10.1109/ICNLP58431.2023.00063
C. Fan, Yongchun Li, Yuexin Wu
In recent years, the number of published scientific research papers has kept growing, and classifying these papers efficiently and accurately is an important problem. However, leading paper classification platforms at home and abroad, such as China National Knowledge Infrastructure and Microsoft Academic Network, rely heavily on the structured or semi-structured text of papers for classification and make little use of the unstructured text data. To address this problem, we propose a multi-feature fusion paper classification model based on an attention mechanism (AttentionMFF), which fuses features from the structured and unstructured text of a paper to improve classification performance. AttentionMFF first extracts features from the different texts of a paper with a BERT layer, then fuses these features with an attention mechanism, and finally obtains the category through a linear layer. Experiments on an arXiv paper dataset show that AttentionMFF achieves a higher F1-score than a TextCNN model and a BERT model that use only the abstract feature.
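For illustration, here is a minimal sketch of the fusion step described in the abstract, assuming PyTorch and the Hugging Face transformers BERT encoder; the choice of text fields (title and abstract), the field-level attention, and all layer sizes are assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of an AttentionMFF-style classifier (not the authors' code):
# each text field is encoded with BERT, the [CLS] vectors are fused with a
# learned attention over fields, and a linear layer predicts the category.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class AttentionMFF(nn.Module):
    def __init__(self, num_classes: int, bert_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        self.attn = nn.Linear(hidden, 1)           # one attention score per field
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, field_encodings):
        # field_encodings: list of tokenizer outputs, one per text field
        cls_vecs = []
        for enc in field_encodings:
            out = self.bert(**enc)
            cls_vecs.append(out.last_hidden_state[:, 0])    # [CLS] vector
        feats = torch.stack(cls_vecs, dim=1)                # (batch, fields, hidden)
        weights = torch.softmax(self.attn(feats), dim=1)    # attention over fields
        fused = (weights * feats).sum(dim=1)                # weighted fusion
        return self.classifier(fused)


if __name__ == "__main__":
    tok = BertTokenizer.from_pretrained("bert-base-uncased")
    model = AttentionMFF(num_classes=10)
    title = tok(["A paper title"], return_tensors="pt", padding=True)
    abstract = tok(["The abstract text ..."], return_tensors="pt", padding=True)
    logits = model([title, abstract])
    print(logits.shape)   # torch.Size([1, 10])
```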
Citations: 0
Alignment Offset Based Adaptive Training for Simultaneous Machine Translation
Q3 Arts and Humanities Pub Date : 2023-03-01 DOI: 10.1109/ICNLP58431.2023.00035
Qiqi Liang, Yanjun Liu, Fandong Meng, Jinan Xu, Yufeng Chen, Jie Zhou
Given incomplete source sentences as inputs, it is generally difficult for Simultaneous Machine Translation (SiMT) models to generate a target token when its aligned source tokens have not yet been received. How to measure such difficulty and further conduct adaptive training for SiMT models has not been sufficiently studied. In this paper, we propose a new metric named alignment offset (AO) to quantify the learning difficulty of target tokens for SiMT models. Given a target token, its AO is calculated as the offset between its aligned source tokens and the already received source tokens. Furthermore, we design two AO-based adaptive training methods to improve the training of SiMT models. First, we introduce token-level curriculum learning based on AO, which progressively switches the training process from easy target tokens to difficult ones. Second, we assign an appropriate weight to the training loss of each target token according to its AO. Experimental results on four datasets demonstrate that our methods significantly and consistently outperform all the strong baselines.
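As a rough illustration of the mechanism (not the paper's exact definition), the snippet below computes an alignment-offset-style difficulty for each target token from word alignments and the number of source tokens already read, and turns it into a per-token loss weight; the exponential weighting and the helper names are assumptions.

```python
# Hypothetical sketch of the alignment-offset (AO) idea for SiMT adaptive
# training: compare the positions of a target token's aligned source tokens
# with the number of source tokens already received, and use the offset to
# weight the per-token training loss. The exact definition and weighting in
# the paper may differ; this only illustrates the mechanism.
import math
from typing import Dict, List


def alignment_offset(aligned_src: List[int], num_received: int) -> int:
    """Offset between the right-most aligned source position and the prefix read so far."""
    if not aligned_src:
        return 0
    return max(0, max(aligned_src) + 1 - num_received)


def ao_loss_weights(alignments: Dict[int, List[int]],
                    received_per_step: List[int],
                    temperature: float = 1.0) -> List[float]:
    """Assign smaller loss weights to harder (larger-AO) target tokens."""
    weights = []
    for t, num_received in enumerate(received_per_step):
        ao = alignment_offset(alignments.get(t, []), num_received)
        weights.append(math.exp(-ao / temperature))
    return weights


if __name__ == "__main__":
    # target token 0 aligns to source position 0; token 1 aligns to positions 3 and 4
    alignments = {0: [0], 1: [3, 4]}
    # a wait-1-style schedule: 1 source token read before target 0, 2 before target 1
    print(ao_loss_weights(alignments, received_per_step=[1, 2]))  # [1.0, 0.0497...]
```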
Citations: 0
Adaptive Kernelized Evidence C-Means Clustering Combining Spatial Information for Noisy Image Segmentation
Q3 Arts and Humanities Pub Date : 2023-03-01 DOI: 10.1109/ICNLP58431.2023.00016
Lan Rong, Haowen Mi, Qu Na, Zhao Feng, Haiyan Yu, Zhang Lu
Although evidence c-means clustering (ECM) can process uncertain information, it is not suitable for noisy image segmentation because it does not consider the spatial information of pixels. To solve this problem, an adaptive kernelized evidence c-means clustering algorithm combining spatial information is proposed for noisy image segmentation. Firstly, an adaptive noise distance that can be iteratively updated is constructed using the local information of the pixels. Secondly, to improve the classification performance, an adaptive kernel function is proposed to measure the distance between a pixel and a cluster center. Simultaneously, the original, local and non-local information of pixels is introduced adaptively into the objective function to enhance robustness to noise. During the iterations, the noise cluster is automatically recovered using a recovery factor constructed from the gray-level and spatial information of neighborhood pixels. Finally, the credal partition is transformed into a fuzzy partition by the pignistic transformation, and the class of each pixel is determined by the maximum membership principle. Experiments on synthetic and real images demonstrate that the proposed algorithm has strong noise suppression ability, and both visual effects and evaluation indexes verify its effectiveness for noisy image segmentation.
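The final step, converting the credal partition into a fuzzy one, uses the standard pignistic transformation from evidence theory. The sketch below shows that transform in isolation, assuming a mass function over cluster subsets; it is not the authors' full segmentation pipeline.

```python
# Generic sketch of the pignistic transformation: mass assigned to a set of
# clusters (credal partition) is split equally among its members, yielding a
# fuzzy membership per singleton cluster. This is the textbook transform, not
# the authors' segmentation code.
from typing import Dict, FrozenSet


def pignistic(masses: Dict[FrozenSet[int], float]) -> Dict[int, float]:
    """Convert a mass function over cluster subsets to singleton probabilities."""
    empty_mass = masses.get(frozenset(), 0.0)
    norm = 1.0 - empty_mass            # discard mass on the empty (noise) set
    betp: Dict[int, float] = {}
    for subset, m in masses.items():
        if not subset or m == 0.0:
            continue
        share = m / (len(subset) * norm)
        for cluster in subset:
            betp[cluster] = betp.get(cluster, 0.0) + share
    return betp


if __name__ == "__main__":
    # mass for one pixel over clusters {1, 2}: part of the mass is committed to
    # the ambiguous set {1, 2}, part to the noise (empty) set
    m = {frozenset({1}): 0.5, frozenset({2}): 0.2,
         frozenset({1, 2}): 0.2, frozenset(): 0.1}
    print(pignistic(m))   # {1: 0.666..., 2: 0.333...}
```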
Citations: 0
A Survey of Speech Recognition Based on Deep Learning
Q3 Arts and Humanities Pub Date : 2023-03-01 DOI: 10.1109/icnlp58431.2023.00034
Youyao Liu, Jiale Chen, Jialei Gao, Shihao Gai
Artificial intelligence is the weathervane of scientific and technological development and of future lifestyle change in the 21st century, and speech recognition, as one of its indispensable technologies, inevitably attracts wide attention. Traditional speech recognition faces two problems: first, recognition performance is hard to improve significantly, and second, speech recognition systems cannot accurately extract data and features. To address these problems, this paper first compares the traditional GMM-HMM speech recognition model with the DNN-HMM model, which improves recognition speed and greatly raises the recognition rate. However, DNN-HMM lacks the ability to use historical information to assist the current task, so the LSTM model is then discussed as a way to compensate for insufficient contextual information and further improve recognition. Finally, to mitigate the loss of long-term memory and speed up training, the Transformer model is reviewed, and to overcome the limitation that traditional language models can only predict the next word in one direction, the BERT model, which uses a bidirectional language model, is also reviewed.
Citations: 0
Siamese Network Visual Tracking Algorithm Based on GCT Attention and Dual-Template Update
Q3 Arts and Humanities Pub Date : 2023-03-01 DOI: 10.1109/ICNLP58431.2023.00014
Sugang Ma, Siwei Sun, Lei Pu, Xiaobao Yang
To address the insufficient representational capability and the lack of online updating of the Fully-Convolutional Siamese Network (SiamFC) tracker in complex scenes, this paper proposes a Siamese network visual tracking algorithm based on GCT attention and a dual-template update mechanism. First, the feature extraction network is constructed by replacing AlexNet with the VGG16 network, and SoftPool is used to replace the maximum pooling layer. Second, an attention module is added after the backbone network to enhance the network's ability to extract object features. Finally, a dual-template update mechanism is designed for response map fusion: the Average Peak-to-Correlation Energy (APCE) is used to decide whether to update the dynamic template, effectively improving tracking robustness. The proposed algorithm is trained on the GOT-10k dataset and tested on the OTB2015 and VOT2018 datasets. The experimental results show that, compared with SiamFC, the success rate and accuracy reach 0.663 and 0.891 on OTB2015, improvements of 7.6% and 11.9%, respectively; on VOT2018, the tracking accuracy, robustness and EAO are improved by 2.9%, 29% and 14%, respectively. The proposed algorithm achieves high tracking accuracy in complex scenes, and its tracking speed reaches 52.6 FPS, meeting real-time requirements.
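APCE is a standard confidence measure for a response map. The sketch below shows how such a criterion could gate the dynamic-template update; the formula is the usual APCE definition, while the threshold policy and function names are illustrative assumptions rather than the paper's exact rule.

```python
# Sketch of an APCE-gated template update: the dynamic template is refreshed
# only when the response map is confident enough relative to its history.
import numpy as np


def apce(response: np.ndarray) -> float:
    """Average Peak-to-Correlation Energy of a 2-D response map."""
    f_max = response.max()
    f_min = response.min()
    return float((f_max - f_min) ** 2 / np.mean((response - f_min) ** 2))


def should_update_template(response: np.ndarray,
                           apce_history: list,
                           ratio: float = 0.6) -> bool:
    """Update the dynamic template when APCE stays above a fraction of its running mean."""
    current = apce(response)
    apce_history.append(current)
    return current >= ratio * float(np.mean(apce_history))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    history = []
    sharp_peak = np.zeros((17, 17))
    sharp_peak[8, 8] = 1.0                               # confident, single-peak response
    noisy = rng.random((17, 17))                         # flat, unreliable response
    print(should_update_template(sharp_peak, history))   # True
    print(should_update_template(noisy, history))        # False (APCE drops sharply)
```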
Citations: 0
Post-encoding and contrastive learning method for response selection task
Q3 Arts and Humanities Pub Date : 2023-03-01 DOI: 10.1109/ICNLP58431.2023.00050
Xianwei Xue, Chunping Li, Zhilin Lu, Youshu Zhang, Shanghua Xiao
Retrieval-based dialogue systems have achieved great performance improvements since the rise of pre-trained language models and Transformer mechanisms. In context and response selection, a pre-trained language model can capture the relationship between texts, but existing methods do not consider the order of sentences or the relationship between the context and the response. At the same time, because retrieval-based dialogue systems contain only a small number of positive samples, it is difficult to train a high-performing model. In addition, existing methods usually incur a large computational cost after concatenating the context and the response. To solve these problems, we propose a post-encoding approach combined with a contrastive learning strategy. The order of the context and the relationships between the sentences of the dialogue and the response are reflected in the encoding process, and a new loss function is designed for contrastive learning. The proposed approach is validated through experiments on public datasets, and the results show that our model achieves better performance and effectiveness than existing methods.
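As a point of reference for the contrastive component, the snippet below shows a generic in-batch contrastive (InfoNCE-style) loss over separately encoded context and response vectors; it is a common formulation, not necessarily the exact loss designed in the paper.

```python
# Generic in-batch contrastive loss over context/response embeddings:
# each context is pulled toward its own response and pushed away from the
# other responses in the batch.
import torch
import torch.nn.functional as F


def contrastive_response_loss(context_emb: torch.Tensor,
                              response_emb: torch.Tensor,
                              temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE-style loss where row i of each tensor forms the positive pair."""
    context_emb = F.normalize(context_emb, dim=-1)
    response_emb = F.normalize(response_emb, dim=-1)
    logits = context_emb @ response_emb.t() / temperature   # (batch, batch) similarities
    labels = torch.arange(logits.size(0), device=logits.device)
    # positives are on the diagonal; every other response in the batch is a negative
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    torch.manual_seed(0)
    ctx = torch.randn(8, 768)     # e.g. post-encoded context vectors
    resp = torch.randn(8, 768)    # matching response vectors (row i pairs with row i)
    print(contrastive_response_loss(ctx, resp).item())
```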
Citations: 0
Adaptive SLIC-Based Fuzzy Intensity Dissimilarity Thresholding for Color Image Segmentation
Q3 Arts and Humanities Pub Date : 2023-03-01 DOI: 10.1109/icnlp58431.2023.00017
Lan Rong, Danlin Feng, Zhao Feng, Haiyan Yu, Zhang Lu
To make full use of the color information of an image and improve the accuracy of color image segmentation, this paper proposes an adaptive SLIC-based fuzzy intensity dissimilarity thresholding algorithm for color image segmentation that requires no grayscale conversion. Firstly, the algorithm adaptively selects the number of super-pixels from the sum of the image information and the image complexity, and uses SLIC to extract the super-pixels. Then, the median value of each channel within each super-pixel block is used as the super-pixel value to compute the super-pixel intensity information, and the super-pixel intensity histogram is built. Finally, an intensity dissimilarity function based on IT2FS is constructed to search for the optimal threshold. On Berkeley and Weizmann images, the proposed algorithm is compared with five related algorithms. The experiments show that it achieves good results in terms of visual effects and evaluation indicators, which proves its effectiveness.
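The superpixel-and-median step can be sketched with scikit-image's SLIC implementation, as below; the adaptive selection of the number of super-pixels and the IT2FS dissimilarity search are not reproduced, and the fixed n_segments value is an assumption.

```python
# Sketch of the superpixel step: extract SLIC super-pixels and fill each one
# with the per-channel median of its pixels, a proxy for the "super-pixel
# value" described above.
import numpy as np
from skimage import data
from skimage.segmentation import slic


def superpixel_medians(image: np.ndarray, n_segments: int = 200) -> np.ndarray:
    """Return an image where each super-pixel is filled with its per-channel median."""
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    out = np.zeros_like(image)
    for lab in np.unique(labels):
        mask = labels == lab
        # per-channel median of the pixels in this super-pixel block
        out[mask] = np.median(image[mask], axis=0)
    return out


if __name__ == "__main__":
    img = data.astronaut()          # sample RGB image shipped with scikit-image
    median_img = superpixel_medians(img, n_segments=300)
    print(median_img.shape, median_img.dtype)
```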
Citations: 0
Assessment of Nonverbal-behavior Annotation Tags in Multimodal Learner Corpus
Q3 Arts and Humanities Pub Date : 2023-03-01 DOI: 10.1109/ICNLP58431.2023.00046
Katsunori Kotani, T. Yoshimi
Learner corpus research has revealed the appropriateness of language learners' language use by analyzing corpus data that include annotation tags. Annotation tags provide linguistic information such as part-of-speech tags and error information on lexical, syntactic, semantic, and phonetic items. Recent learner corpus research compiling multimodal learner corpora has extended the research target to nonverbal behaviors such as facial expressions and gesturing, because nonverbal behaviors play a significant role in communication. The goal of this paper is twofold. The first objective is to validate the nonverbal-behavior annotation tags of previous multimodal learner corpora. The second objective is to propose a plausible nonverbal-behavior tag set for a multimodal learner corpus.
Citations: 0
A Graph Autoencoder-based Anomaly Detection Method for Attributed Networks
Q3 Arts and Humanities Pub Date : 2023-03-01 DOI: 10.1109/ICNLP58431.2023.00067
Kunpeng Zhang, Guangyue Lu, Yuxin Li, Cai Xu
Anomaly detection in attributed networks aims to find anomalous nodes whose behavior patterns differ from those of most nodes, and graph neural networks provide a way to use fused structural and attribute information. However, existing methods based on the Graph Convolutional Network (GCN) do not account for the over-smoothing caused by stacking network layers, which leads to significant performance deterioration. To address this problem, we propose a graph autoencoder-based anomaly detection method for attributed networks, the Residual Graph Autoencoder (Res-GAE), which effectively improves detection performance. Res-GAE contains an encoder and two decoders. More specifically, the encoder combines a GCN with a residual network to learn the network representation, and the two decoders reconstruct the network structure and the node attributes, respectively. The reconstruction error in the objective function is then used to produce an anomaly score ranking and thereby detect anomalies. Extensive experiments on three datasets (BlogCatalog, Flickr, ACM) demonstrate that the proposed method significantly outperforms other baseline methods.
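A minimal sketch of this kind of architecture, written in plain PyTorch, is shown below; the layer sizes, the form of the residual connection, and the weighting of the two reconstruction errors are assumptions rather than the Res-GAE specification.

```python
# Minimal graph-autoencoder anomaly detector: a GCN encoder with a residual
# connection produces node embeddings, one decoder reconstructs the adjacency
# matrix and another the node attributes, and the per-node anomaly score mixes
# both reconstruction errors.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)


class ResGAE(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int = 64):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, hid_dim)
        self.res = nn.Linear(in_dim, hid_dim)       # residual path from raw attributes
        self.attr_dec = nn.Linear(hid_dim, in_dim)  # attribute decoder

    def forward(self, x, adj_norm):
        h = F.relu(adj_norm @ self.w1(x))
        z = adj_norm @ self.w2(h) + self.res(x)     # encoder output with residual
        a_rec = torch.sigmoid(z @ z.t())            # structure decoder
        x_rec = self.attr_dec(z)                    # attribute decoder
        return a_rec, x_rec


def anomaly_scores(adj, x, a_rec, x_rec, alpha: float = 0.5) -> torch.Tensor:
    """Per-node score: weighted sum of structure and attribute reconstruction errors."""
    struct_err = ((adj - a_rec) ** 2).sum(dim=1).sqrt()
    attr_err = ((x - x_rec) ** 2).sum(dim=1).sqrt()
    return alpha * struct_err + (1 - alpha) * attr_err


if __name__ == "__main__":
    torch.manual_seed(0)
    n, d = 30, 16
    adj = (torch.rand(n, n) < 0.1).float()
    adj = ((adj + adj.t()) > 0).float()             # make the toy graph symmetric
    x = torch.randn(n, d)
    model = ResGAE(in_dim=d)
    a_rec, x_rec = model(x, normalize_adj(adj))
    scores = anomaly_scores(adj, x, a_rec, x_rec)
    print(scores.topk(3).indices)                   # nodes ranked most anomalous
```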
Citations: 0
Construction and Performance Analysis of Combinatorial Chaotic Map Based on Fuzzy Entropy
Q3 Arts and Humanities Pub Date : 2023-03-01 DOI: 10.1109/icnlp58431.2023.00021
Tingting Chen, Xiaodong Zhang, Meixia Miao, Pengfei Tu
To address the simple chaotic behavior and poor initial-value sensitivity of traditional one-dimensional chaotic maps, a one-dimensional combined chaotic map based on fuzzy entropy theory is proposed. The designed combinatorial chaotic map combines the definition of fuzzy entropy with a classical one-dimensional chaotic map, and it effectively improves performance by extending the range of the system parameters. Simulations and analysis of the bifurcation diagram, Lyapunov exponent, initial-value sensitivity, and other related properties show that the combined chaotic map constructed in this paper has good chaotic properties such as initial-value sensitivity and randomness.
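As background for the reported analysis, the snippet below estimates the Lyapunov exponent of a one-dimensional map numerically, using the classical logistic map as a stand-in; the fuzzy-entropy-based combination itself is not reproduced here.

```python
# Numerical Lyapunov-exponent estimate for a 1-D map: average log|f'(x)| along
# the orbit after discarding a transient. A positive value indicates chaos.
import math


def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)


def logistic_deriv(x: float, r: float = 4.0) -> float:
    return r * (1.0 - 2.0 * x)


def lyapunov_exponent(x0: float, r: float = 4.0,
                      n_iter: int = 10000, n_discard: int = 100) -> float:
    """Average of log|f'(x_n)| along the orbit after discarding the transient."""
    x = x0
    for _ in range(n_discard):
        x = logistic(x, r)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(logistic_deriv(x, r)))
        x = logistic(x, r)
    return total / n_iter


if __name__ == "__main__":
    # For r = 4 the theoretical value is ln(2), approximately 0.693, indicating chaos.
    print(lyapunov_exponent(x0=0.123))
```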
Citations: 0