
ACM Transactions on Asian and Low-Resource Language Information Processing: Latest Publications

A Survey of Knowledge Enhanced Pre-trained Language Models
IF 2 | Computer Science (CAS Tier 4, JCR Q2) | Pub Date: 2024-03-01 | DOI: 10.1145/3631392
Jian Yang, Xinyu Hu, Gang Xiao, Yulong Shen

Pre-trained language models learn informative word representations from large-scale text corpora through self-supervised learning and, after fine-tuning, achieve promising performance on natural language processing (NLP) tasks. These models, however, suffer from poor robustness and lack of interpretability. We refer to pre-trained language models with knowledge injection as knowledge-enhanced pre-trained language models (KEPLMs). These models demonstrate deep understanding and logical reasoning and introduce interpretability. In this survey, we provide a comprehensive overview of KEPLMs in NLP. We first discuss the advancements in pre-trained language models and knowledge representation learning. Then we systematically categorize existing KEPLMs from three different perspectives. Finally, we outline some potential directions of KEPLMs for future research.

Citations: 0
Exploration on Advanced Intelligent Algorithms of Artificial Intelligence for Verb Recognition in Machine Translation
IF 2 | Computer Science (CAS Tier 4, JCR Q2) | Pub Date: 2024-02-28 | DOI: 10.1145/3649891
Qinghua Ai, Qingyan Ai, Jun Wang

This article aimed to address the problems of word-order confusion, context dependency, and ambiguity that traditional machine translation (MT) methods face in verb recognition. By applying advanced intelligent algorithms from artificial intelligence, verb recognition can be handled better and the quality and accuracy of MT can be improved. Building on neural machine translation (NMT), basic attention mechanisms, historical attention information, dynamic retrieval of information related to the generated words, and constraint mechanisms were introduced to embed semantic information, represent polysemy, and annotate the semantic roles of verbs. This article used the Workshop on Machine Translation (WMT), British National Corpus (BNC), Gutenberg, Reuters, and OpenSubtitles corpora, and augmented the data in these corpora. The improved NMT model was compared with traditional NMT models, rule-based machine translation (RBMT), and statistical machine translation (SMT). The experimental results showed that the improved NMT model achieved an average verb semantic matching degree of 0.85 and an average Bilingual Evaluation Understudy (BLEU) score of 0.90 across the 5 corpora. The improved NMT model in this article can effectively improve the accuracy of verb recognition in MT, providing new methods for verb recognition in MT.
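As a rough illustration of the kind of basic attention mechanism the abstract refers to (not the authors' model), the following minimal NumPy sketch scores each source position against the current decoder state and forms a context vector; all dimensions and data are toy placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dot_product_attention(decoder_state, encoder_states):
    """Score each source position against the current decoder state and
    return a context vector as the attention-weighted sum of encoder states."""
    scores = encoder_states @ decoder_state / np.sqrt(decoder_state.shape[-1])
    weights = softmax(scores)            # one weight per source token
    context = weights @ encoder_states   # weighted sum over source positions
    return context, weights

# Toy usage: 5 source tokens, hidden size 8.
rng = np.random.default_rng(0)
enc = rng.normal(size=(5, 8))   # encoder hidden states
dec = rng.normal(size=(8,))     # current decoder hidden state
context, weights = dot_product_attention(dec, enc)
print(weights.round(3), context.shape)
```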

Citations: 0
Leveraging Bidirectional LSTM with CRFs for Pashto tagging
IF 2 | Computer Science (CAS Tier 4, JCR Q2) | Pub Date: 2024-02-27 | DOI: 10.1145/3649456
Farooq Zaman, Onaiza Maqbool, Jaweria Kanwal

Part-of-speech tagging plays a vital role in text processing and natural language understanding. Very few attempts have been made in the past at tagging Pashto parts of speech. In this work, we present an LSTM-based approach for Pashto part-of-speech tagging with a special focus on ambiguity resolution. Initially, we created a corpus of Pashto sentences containing words with multiple meanings and their tags. We introduce a powerful sentence representation and a new architecture for Pashto text processing. The accuracy of the proposed approach is compared with a state-of-the-art Hidden Markov Model. Our model shows 87.60% accuracy for all words excluding punctuation and 95.45% for ambiguous words, whereas the Hidden Markov Model shows 78.37% and 44.72% accuracy, respectively. Results show that our approach outperforms the Hidden Markov Model in part-of-speech tagging for Pashto text.
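For readers unfamiliar with this architecture class, here is a minimal PyTorch sketch of a bidirectional-LSTM tagger. It is a simplification under stated assumptions, not the authors' implementation: the CRF decoding layer named in the title is omitted, and vocabulary size, tag set, and dimensions are placeholders.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Minimal BiLSTM POS tagger: embeddings -> bidirectional LSTM -> per-token tag scores.
    The CRF layer used in the paper for joint decoding is omitted for brevity."""
    def __init__(self, vocab_size, tagset_size, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, tagset_size)  # 2x for both directions

    def forward(self, token_ids):          # token_ids: (batch, seq_len)
        x = self.embed(token_ids)          # (batch, seq_len, emb_dim)
        h, _ = self.lstm(x)                # (batch, seq_len, 2*hidden_dim)
        return self.fc(h)                  # emission scores per tag

# Toy usage: 2 sentences of 6 token ids each, vocabulary of 5000, 12 tags.
model = BiLSTMTagger(vocab_size=5000, tagset_size=12)
tokens = torch.randint(1, 5000, (2, 6))
print(model(tokens).shape)                 # torch.Size([2, 6, 12])
```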

Citations: 0
A Hybrid Scene Text Script Identification Network for regional Indian Languages
IF 2 | Computer Science (CAS Tier 4, JCR Q2) | Pub Date: 2024-02-24 | DOI: 10.1145/3649439
Veronica Naosekpam, Nilkanta Sahu

In this work, we introduce WAFFNet, an attention-centric feature fusion architecture tailored for word-level multi-lingual scene text script identification. Motivated by the limitations of traditional approaches that rely exclusively on feature-based methods or deep learning strategies, our approach amalgamates statistical and deep features to bridge the gap. At the core of WAFFNet, we utilize the merits of the Local Binary Pattern, a prominent descriptor capturing low-level texture features, together with high-dimensional, semantically rich convolutional features. This fusion is judiciously augmented by a spatial attention mechanism, ensuring targeted emphasis on semantically critical regions of the input image. To address the class imbalance problem in multi-class classification scenarios, we employed a weighted objective function, which not only regularizes the learning process but also mitigates the imbalance itself. The architectural integrity of WAFFNet is preserved through an end-to-end training paradigm, leveraging transfer learning to expedite convergence and optimize performance metrics. Considering the under-representation of regional Indian languages in current datasets, we meticulously curated IIITG-STLI2023, a comprehensive dataset encapsulating English alongside six under-represented Indian languages: Hindi, Kannada, Malayalam, Telugu, Bengali, and Manipuri. Rigorous evaluation on IIITG-STLI2023, as well as on the established MLe2e and SIW-13 datasets, underscores WAFFNet's supremacy over both traditional feature-engineering approaches and state-of-the-art deep learning frameworks. Thus, the proposed WAFFNet framework offers a robust and effective solution for language identification in scene text images.
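A minimal sketch of the statistical-plus-deep fusion idea, assuming scikit-image's `local_binary_pattern` and a random placeholder vector standing in for the convolutional features; this is not WAFFNet itself, and the image, dimensions, and feature sizes are invented for illustration.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, points=8, radius=1):
    """Uniform LBP codes pooled into a normalized histogram (a low-level texture descriptor)."""
    codes = local_binary_pattern(gray_image, P=points, R=radius, method="uniform")
    n_bins = points + 2   # uniform patterns plus one catch-all bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Toy grayscale word image and a placeholder CNN feature vector.
rng = np.random.default_rng(0)
word_img = (rng.random((32, 96)) * 255).astype(np.uint8)  # stand-in for a cropped scene-text word
texture_feat = lbp_histogram(word_img)                    # statistical feature (length 10)
cnn_feat = rng.normal(size=128)                           # stand-in for deep convolutional features
fused = np.concatenate([texture_feat, cnn_feat])          # fused descriptor fed to the classifier head
print(fused.shape)                                        # (138,)
```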

Citations: 0
A Natural Language Processing System for Text Classification Corpus Based on Machine Learning
IF 2 | Computer Science (CAS Tier 4, JCR Q2) | Pub Date: 2024-02-19 | DOI: 10.1145/3648361
Yawen Su

A classification system for hazardous materials in air traffic control was investigated using the Human Factors Analysis and Classification System (HFACS) framework and natural language processing, in order to prevent hazardous situations in air traffic control. Based on the development of the HFACS standard, an air traffic control hazard classification system will be created. The dangerous data of the aviation safety management system is selected by dead bodies, classified, and marked at 5 levels. A TF-IDF TextRank text classification method based on key content extraction and a text classification model based on CNN and BERT were used in the experiment to solve the problem of small samples, many labels, and random samples in the hazardous environment of air traffic control. The results show that the total cost of model training time and classification accuracy is highest when the number of keywords is around 8. As the number of points increases, the time spent on dimensioning decreases, which affects accuracy. When the number of points reaches about 93, the time spent determining the size increases, but the allocation accuracy remains close to 0.7, and the increase in time leads to a decrease in the total cost. It has been proven that extracting key content can solve text classification problems for small companies and contribute to further research in the development of security systems.
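A small sketch of TF-IDF key-content extraction, one ingredient of the TF-IDF TextRank method named above; the incident-report snippets are invented, and the TextRank graph step and the CNN/BERT classifier are not shown.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical incident-report snippets standing in for the hazard corpus.
docs = [
    "runway incursion reported by tower controller during low visibility",
    "altitude deviation caused by incorrect clearance readback",
    "loss of separation between two aircraft on parallel approaches",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)             # (n_docs, n_terms) sparse matrix
terms = np.array(vectorizer.get_feature_names_out())

# Top-k TF-IDF terms per document approximate the "key content" fed to the classifier.
k = 3
for i, doc in enumerate(docs):
    row = tfidf[i].toarray().ravel()
    top = terms[row.argsort()[::-1][:k]]
    print(f"doc {i}: {', '.join(top)}")
```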

Citations: 0
SCT: Summary Caption Technique for Retrieving Relevant Images in Alignment with Multimodal Abstractive Summary
IF 2 | Computer Science (CAS Tier 4, JCR Q2) | Pub Date: 2024-02-17 | DOI: 10.1145/3645029
Shaik Rafi, Ranjita Das

This work proposes an efficient Summary Caption Technique (SCT) which takes the multimodal summary and image captions as input and retrieves, via the captions, the corresponding images that are most influential to the multimodal summary. Matching a multimodal summary with an appropriate image is a challenging task in the computer vision (CV) and natural language processing (NLP) fields. Merging these fields is tedious, although the research community has steadily focused on cross-modal retrieval. The issues include visual question answering, matching queries with images, and matching semantic relationships between two modalities to retrieve the corresponding image. Relevant works consider matching the relationship of visual information in questions, detecting objects and matching text with visual information, and employing structural-level representations to align images with text. However, these techniques primarily focus on retrieving images for text or for image captioning; less effort has been spent on retrieving relevant images for a multimodal summary. Hence, our proposed technique extracts and merges features in a Hybrid Image Text (HIT) layer and embeds captions with word2vec semantic embeddings, where contextual features and semantic relationships are compared and matched between each vector of the two modalities using cosine semantic similarity. In cross-modal retrieval, we obtain the top five related images and align with the multimodal summary the relevant image that achieves the highest cosine score among the retrieved images. The model has been trained as a seq-to-seq model for 100 epochs, while reducing the information loss with sparse categorical cross-entropy. Further, experimenting with the multimodal summarization with multimodal output dataset (MSMO) in cross-modal retrieval helps to evaluate the quality of image alignment with an image-precision metric, which demonstrates the best results.
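A minimal sketch of the retrieval step described above, assuming averaged word-vector sentence embeddings and cosine ranking; the embedding table, summary, and captions are random placeholders rather than trained word2vec vectors or MSMO data.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 50
# Placeholder word-embedding table standing in for trained word2vec vectors.
vocab = {w: rng.normal(size=DIM) for w in
         "a flood submerges village rescue boats carry residents market sells fruit".split()}

def embed(text):
    """Average the word vectors of known tokens (simple sentence embedding)."""
    vecs = [vocab[w] for w in text.lower().split() if w in vocab]
    return np.mean(vecs, axis=0) if vecs else np.zeros(DIM)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

summary = "rescue boats carry village residents"
captions = ["a flood submerges village", "rescue boats carry residents",
            "market sells fruit", "boats carry fruit", "village market"]

ranked = sorted(captions, key=lambda c: cosine(embed(summary), embed(c)), reverse=True)
print(ranked[:5])   # the five captions whose images would be aligned with the summary
```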

Citations: 0
Handling Imbalance and Limited Data in Thyroid Ultrasound and Diabetic Retinopathy Datasets Using Discrete Levy Flights Grey Wolf Optimizer Based Random Forest for Robust Medical Data Classification
IF 2 | Computer Science (CAS Tier 4, JCR Q2) | Pub Date: 2024-02-16 | DOI: 10.1145/3648363
Shobha Aswal, Neelu Jyothi Ahuja, Ritika Mehra

In the field of disease diagnosis, medical image classification faces inherent challenges due to various factors, including data imbalance, image quality variability, annotation variability, and limited data availability and representativeness. Such challenges adversely affect an algorithm's ability to classify medical images, which leads to biased model outcomes and inaccurate interpretations. In this paper, a novel Discrete Levy Flight Grey Wolf Optimizer (DLFGWO) is combined with the Random Forest (RF) classifier to address the above limitations on biomedical datasets and to achieve a better classification rate. The DLFGWO-RF resolves the image quality variability in ultrasound images and limits the inaccuracies of RF classification by handling incomplete and noisy data. A sheer focus on the majority class may lead to unequal distribution of classes and thus to data imbalance. The DLFGWO balances such distributions by leveraging grey wolves, and its exploration and exploitation capabilities are improved using Discrete Levy Flight (DLF). It further optimizes the classifier's performance to achieve a balanced classification rate. DLFGWO-RF is designed to perform classification even on limited datasets, thereby reducing the requirement for numerous expert annotations. In diabetic retinopathy grading, the DLFGWO-RF reduces disagreements in annotation variability using subjective interpretations. However, the representativeness of the diabetic retinopathy dataset fails to capture the full population diversity, which limits the generalization ability of the proposed DLFGWO-RF. Thus, fine-tuning of RF can robustly adapt to the subgroups in the dataset, enhancing its overall performance. The experiments are conducted on two widely used medical image datasets to test the efficacy of the model. The experimental results show that the DLFGWO-RF classifier achieves improved classification accuracy between 90% and 95%, outperforming existing techniques on various imbalanced datasets.
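A rough sketch of a plain grey wolf optimizer driving a random forest hyperparameter search on a toy imbalanced dataset; the discrete Levy-flight step and the paper's specific fitness design are not reproduced, and the bounds, pack size, and iteration count are arbitrary assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Toy imbalanced dataset standing in for the medical data.
X, y = make_classification(n_samples=300, n_features=10, weights=[0.8, 0.2], random_state=0)

lo, hi = np.array([10.0, 2.0]), np.array([150.0, 20.0])   # bounds: n_estimators, max_depth

def fitness(pos):
    """F1 of a class-weighted random forest at this hyperparameter position."""
    clf = RandomForestClassifier(n_estimators=int(pos[0]), max_depth=int(pos[1]),
                                 class_weight="balanced", random_state=0)
    return cross_val_score(clf, X, y, cv=3, scoring="f1").mean()

wolves = rng.uniform(lo, hi, size=(5, 2))                  # small pack of candidate solutions
T = 5
for t in range(T):
    scores = np.array([fitness(w) for w in wolves])
    alpha, beta, delta = wolves[scores.argsort()[::-1][:3]]  # three best wolves lead the pack
    a = 2 - 2 * t / T                                        # exploration factor decays over time
    for i in range(len(wolves)):
        new = np.zeros(2)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(2), rng.random(2)
            A, C = 2 * a * r1 - a, 2 * r2
            new += leader - A * np.abs(C * leader - wolves[i])
        wolves[i] = np.clip(new / 3, lo, hi)                 # average pull toward the three leaders

best = wolves[np.argmax([fitness(w) for w in wolves])]
print("best n_estimators, max_depth:", int(best[0]), int(best[1]))
```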

Citations: 0
Enriching Urdu NER with BERT Embedding, Data Augmentation, and Hybrid Encoder-CNN Architecture
IF 2 | Computer Science (CAS Tier 4, JCR Q2) | Pub Date: 2024-02-15 | DOI: 10.1145/3648362
Anil Ahmed, Degen Huang, Syed Yasser Arafat, Imran Hameed

Named Entity Recognition (NER) is an indispensable component of Natural Language Processing (NLP), which aims to identify and classify entities within text data. While Deep Learning (DL) models have excelled in NER for well-resourced languages like English, Spanish, and Chinese, they face significant hurdles when dealing with low-resource languages like Urdu. These challenges stem from the intricate linguistic characteristics of Urdu, including morphological diversity, context-dependent lexicon, and the scarcity of training data. This study addresses these issues by focusing on Urdu Named Entity Recognition (U-NER) and introducing three key contributions. First, various pre-trained embedding methods are employed, encompassing Word2vec (W2V), GloVe, FastText, Bidirectional Encoder Representations from Transformers (BERT), and Embeddings from language models (ELMo). In particular, fine-tuning is performed on BERT-Base and ELMo using Urdu Wikipedia and news articles. Secondly, a novel generative Data Augmentation (DA) technique replaces Named Entities (NEs) with mask tokens, employing pre-trained masked language models to predict masked tokens, effectively expanding the training dataset. Finally, the study introduces a novel hybrid model combining a Transformer Encoder with a Convolutional Neural Network (CNN) to capture the intricate morphology of Urdu. These modules enable the model to handle polysemy, extract short- and long-range dependencies, and enhance learning capacity. Empirical experiments demonstrate that the proposed model, incorporating BERT embeddings and an innovative DA approach, attains the highest F1-Score of 93.99%, highlighting its efficacy for the U-NER task.
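A small sketch of the masked-language-model augmentation idea, assuming the Hugging Face fill-mask pipeline with a multilingual checkpoint as a stand-in (the paper fine-tunes Urdu-specific models); the sentence, the entity, and the lack of candidate filtering are illustrative simplifications only.

```python
from transformers import pipeline

# Placeholder multilingual checkpoint; the paper fine-tunes on Urdu Wikipedia and news text.
fill_mask = pipeline("fill-mask", model="bert-base-multilingual-cased")
mask = fill_mask.tokenizer.mask_token   # "[MASK]" for BERT-style models

# Replace a person-name token with the mask and let the model propose substitutes,
# producing new training sentences that keep the same NE label ("PER") on the new token.
# In practice the candidates would be filtered to plausible entity strings.
original = "Imran met the delegation in Karachi ."
masked_sentence = original.replace("Imran", mask, 1)

for prediction in fill_mask(masked_sentence, top_k=3):
    print(prediction["sequence"])       # each sequence is a candidate augmented sample
```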

Citations: 0
Sentiment Analysis Method of Epidemic-related Microblog Based on Hesitation Theory
IF 2 | Computer Science (CAS Tier 4, JCR Q2) | Pub Date: 2024-02-14 | DOI: 10.1145/3648360
Yang Yu, Dong Qiu, HuanYu Wan

The COVID-19 pandemic in 2020 brought an unprecedented global crisis. After two years of control efforts, life gradually returned to the pre-pandemic state, but localized outbreaks continued to occur. Towards the end of 2022, COVID-19 resurged in China, leading to another disruption of people’s lives and work. Many pieces of information on social media reflected people’s views and emotions towards the second outbreak, which showed distinct differences compared to the first outbreak in 2020. To explore people’s emotional attitudes towards the pandemic at different stages and the underlying reasons, this study collected microblog data from November 2022 to January 2023 and from January to June 2020, encompassing Chinese reactions to the COVID-19 pandemic. Based on hesitancy and the Fuzzy Intuition theory, we proposed a hypothesis: hesitancy can be integrated into machine learning models to select suitable corpora for training, which not only improves accuracy but also enhances model efficiency. Based on this hypothesis, we designed a hesitancy-integrated model. The experimental results demonstrated the model’s positive performance on a self-constructed database. By applying this model to analyze people’s attitudes towards the pandemic, we obtained their sentiments in different months. We found that the most negative emotions appeared at the beginning of the pandemic, followed by emotional fluctuations influenced by social events, ultimately showing an overall positive trend. Combining word cloud techniques and the Latent Dirichlet Allocation (LDA) model effectively helped explore the reasons behind the changes in pandemic attitude.
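A minimal sketch of the LDA topic-modelling step used to surface the reasons behind sentiment shifts, using scikit-learn; the invented English posts stand in for the Weibo corpus, and the hesitancy-integrated classifier itself is not shown.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for microblog posts; the study uses Chinese Weibo text.
posts = [
    "lockdown lifted shops reopen streets busy again",
    "fever medicine sold out pharmacy queue long",
    "working from home again office closed this week",
    "antigen tests hard to find family all sick",
    "travel plans resumed booked train tickets home",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(posts)                       # bag-of-words counts
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_terms = terms[topic.argsort()[::-1][:4]]               # most probable words per topic
    print(f"topic {k}:", ", ".join(top_terms))
```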

Citations: 0
MSEConv: A Unified Warping Framework for Video Frame Interpolation
IF 2 | Computer Science (CAS Tier 4, JCR Q2) | Pub Date: 2024-02-14 | DOI: 10.1145/3648364
Xiangling Ding, Pu Huang, Dengyong Zhang, Wei Liang, Feng Li, Gaobo Yang, Xin Liao, Yue Li

Within the context of video frame interpolation, complex motion modeling is the task of capturing, in a video sequence, where the moving objects are located in the interpolated frame and how to maintain the temporal consistency of motion. Existing video frame interpolation methods typically assign either a fixed-size motion kernel or a refined optical flow to model complex motions. However, they suffer from data redundancy and inaccurate representation of motion. This paper introduces a unified warping framework, named multi-scale expandable deformable convolution (MSEConv), for simultaneously performing complex motion modeling and frame interpolation. In the proposed framework, a deep fully convolutional neural network with global attention is proposed to estimate multiple small-scale kernel weights with different expansion degrees and adaptive weight allocation for each pixel synthesis. Moreover, most kernel-based interpolation methods can be treated as special cases of the proposed MSEConv; thus, MSEConv can be easily transferred to other kernel-based frame interpolation methods for performance improvement. To further improve robustness to motion occlusions, an operation of mask occlusion is introduced. As a consequence, our proposed MSEConv shows strong performance on par with or even better than state-of-the-art kernel-based frame interpolation works on public datasets. Our source code and visual comparison results are available at https://github.com/Pumpkin123709/MSEConv.
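A toy sketch of the kernel-based pixel synthesis that such frame interpolation methods build on: each output pixel is a weighted sum of local patches from the two input frames. This is not MSEConv itself; the per-pixel kernels here are uniform placeholders rather than network predictions, and the frames and sizes are invented.

```python
import numpy as np

def synthesize_pixelwise(frame0, frame1, kernels0, kernels1, k=3):
    """Kernel-based frame synthesis: every output pixel is the sum of two local patches
    (one from each input frame) weighted by its own k x k kernels."""
    h, w = frame0.shape
    pad = k // 2
    f0 = np.pad(frame0, pad, mode="edge")
    f1 = np.pad(frame1, pad, mode="edge")
    out = np.zeros_like(frame0)
    for y in range(h):
        for x in range(w):
            p0 = f0[y:y + k, x:x + k]
            p1 = f1[y:y + k, x:x + k]
            out[y, x] = (kernels0[y, x] * p0).sum() + (kernels1[y, x] * p1).sum()
    return out

# Toy frames and placeholder per-pixel kernels (uniform averaging; a network would predict these).
h, w, k = 8, 8, 3
rng = np.random.default_rng(0)
frame0, frame1 = rng.random((h, w)), rng.random((h, w))
uniform = np.full((h, w, k, k), 0.5 / (k * k))   # each frame contributes half of the local average
mid_frame = synthesize_pixelwise(frame0, frame1, uniform, uniform, k)
print(mid_frame.shape)
```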

Citations: 0