
Latest Publications: ACM Transactions on Asian and Low-Resource Language Information Processing

Boundary-Aware Abstractive Summarization with Entity-Augmented Attention for Enhancing Faithfulness
IF 2 | CAS Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2024-02-13 | DOI: 10.1145/3641278
Jiuyi Li, Junpeng Liu, Jianjun Ma, Wei Yang, Degen Huang

With the successful application of deep learning, document summarization systems can produce more readable results. However, abstractive summarization still suffers from unfaithful outputs and factual errors, especially in named entities. Current approaches tend to employ external knowledge to improve model performance while neglecting the boundary information and semantics of entities. In this paper, we propose an entity-augmented method (EAM) that encourages the model to make full use of entity boundary information and to pay more attention to critical entities. Experimental results on three Chinese and English summarization datasets show that our method outperforms several strong baselines and achieves state-of-the-art performance on the CLTS dataset. Our method also improves the faithfulness of the summary and generalizes well to different pre-trained language models. Moreover, we propose a method to evaluate the integrity of generated entities. Finally, we adapt the data augmentation method of the FactCC model to the grammatical differences between Chinese and English and train a new evaluation model for factual-consistency evaluation in Chinese summarization.
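The abstract does not give the exact attention formulation, so the following is only a minimal sketch of one way entity-boundary information might bias attention: an additive term that raises attention scores at positions marked as entity tokens. The `gamma` weight and mask layout are assumptions, not the paper's design.

```python
import torch
import torch.nn.functional as F

def entity_augmented_attention(q, k, v, entity_mask, gamma=1.0):
    """Scaled dot-product attention with an additive boost at entity positions.

    q, k, v: (batch, seq, dim); entity_mask: (batch, seq), 1.0 at entity tokens.
    gamma is a hypothetical scalar weight, not taken from the paper.
    """
    scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5   # (batch, seq, seq)
    scores = scores + gamma * entity_mask.unsqueeze(1)      # bias every query toward entity tokens
    return F.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(2, 6, 16)
mask = torch.zeros(2, 6)
mask[:, 2:4] = 1.0                                          # tokens 2-3 form an entity span
out = entity_augmented_attention(q, k, v, mask)             # (2, 6, 16)
```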

Citations: 0
A Study for Enhancing Low-resource Thai-Myanmar-English Neural Machine Translation
IF 2 | CAS Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2024-02-13 | DOI: 10.1145/3645111
Mya Ei San, Sasiporn Usanavasin, Ye Kyaw Thu, Manabu Okumura

Several methodologies have recently been proposed to enhance the performance of low-resource Neural Machine Translation (NMT). However, these techniques have yet to be explored thoroughly for the low-resource Thai and Myanmar languages. Therefore, we first applied augmentation techniques such as SwitchOut and Ciphertext-Based Data Augmentation (CipherDAug) to improve NMT performance in these languages. Second, we enhanced NMT performance by fine-tuning the pre-trained Multilingual Denoising BART model (mBART), where BART denotes Bidirectional and Auto-Regressive Transformer. We implemented three NMT systems, namely Transformer+SwitchOut, Multi-source Transformer+CipherDAug, and fine-tuned mBART, for the bidirectional translation of Thai-English-Myanmar language pairs from the ASEAN-MT corpus. Experimental results showed that Multi-source Transformer+CipherDAug significantly improved BLEU, ChrF, and TER scores over the first baseline, Transformer, and the second baseline, the Edit-Based Transformer (EDITOR). The model achieved notable BLEU scores: 37.9 (English-to-Thai), 42.7 (Thai-to-English), 28.9 (English-to-Myanmar), 31.2 (Myanmar-to-English), 25.3 (Thai-to-Myanmar), and 25.5 (Myanmar-to-Thai). The fine-tuned mBART model also considerably outperformed the two baselines, except on the Myanmar-to-English pair. SwitchOut improved over the second baseline on all pairs and performed similarly to the first baseline in most cases. Lastly, we performed detailed analyses verifying that the CipherDAug and mBART models can facilitate improved low-resource NMT performance for the Thai and Myanmar languages.
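SwitchOut augments a parallel pair by replacing a sampled number of tokens with random vocabulary words, where the replacement count is drawn with probability proportional to exp(-k/τ). Below is a simplified sketch of that sampling scheme, not the paper's implementation; the temperature and vocabulary are placeholders.

```python
import math
import random

def switchout(tokens, vocab, tau=1.0):
    """Replace a randomly sampled number of tokens with random vocabulary words.

    The replacement count k is drawn with p(k) proportional to exp(-k / tau),
    in the spirit of SwitchOut; vocab is a plain list of candidate tokens.
    """
    n = len(tokens)
    weights = [math.exp(-k / tau) for k in range(n + 1)]
    k = random.choices(range(n + 1), weights=weights)[0]
    out = list(tokens)
    for i in random.sample(range(n), k):    # k distinct positions to corrupt
        out[i] = random.choice(vocab)
    return out

print(switchout("the cat sat on the mat".split(), vocab=["dog", "ran", "under"]))
```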

Citations: 0
Disambiguation of Isolated Manipuri Tonal Contrast Word Pairs using Acoustic Features
IF 2 | CAS Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2024-02-12 | DOI: 10.1145/3643830
Thiyam Susma Devi, Pradip K. Das

Manipuri is a low-resource, Tibeto-Burman tonal language spoken mainly in Manipur, a northeastern state of India. Tone identification is crucial to speech comprehension in tonal languages, where tone defines a word's meaning. Automatic Speech Recognition for such languages can perform better when tonal information from a powerful tone detection system is included. While significant research has been conducted on tonal languages like Mandarin, Thai, Cantonese, and Vietnamese, a notable gap exists in exploring Manipuri in this context. To address this gap, this work expands our previously developed handcrafted speech corpus, ManiTo, which comprises isolated Manipuri tonal contrast word pairs for studying the tones of Manipuri. This extension includes contributions from twenty native speakers. Preliminary findings confirm that Manipuri has two distinct tones, Falling and Level. The study then conducts a comprehensive acoustic feature analysis: two sets of features based on pitch contours and on Jitter and Shimmer measurements are investigated to distinguish the two tones. Support Vector Machine, Long Short-Term Memory, Random Forest, and k-Nearest Neighbors classifiers are adopted to validate the selected feature sets. The results indicate that the second feature set consistently outperformed the first, demonstrating higher accuracy, particularly with the Random Forest classifier, which provides valuable insights for further advances in speech recognition technology for the low-resource tonal language Manipuri.
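As a rough illustration of such features, the sketch below derives a pitch contour with librosa and computes simple jitter/shimmer-style and contour-slope statistics. The exact feature definitions, frequency range, and classifier settings are not given in the abstract, so everything here is an assumption.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def tone_features(wav_path):
    """Pitch-contour and rough jitter/shimmer statistics for one word recording."""
    y, sr = librosa.load(wav_path, sr=None)
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)        # frame-level pitch estimates
    f0 = f0[np.isfinite(f0)]
    periods = 1.0 / f0
    jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)   # cycle-to-cycle pitch variation
    rms = librosa.feature.rms(y=y)[0]
    shimmer = np.mean(np.abs(np.diff(rms))) / np.mean(rms)          # frame-to-frame amplitude variation
    slope = np.polyfit(np.arange(len(f0)), f0, 1)[0]                # falling vs. level contour
    return [np.mean(f0), np.std(f0), slope, jitter, shimmer]

# Hypothetical usage, given labeled recordings:
# X = np.array([tone_features(p) for p in wav_paths]); y = tone_labels
# clf = RandomForestClassifier(n_estimators=200).fit(X, y)
```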

Citations: 0
CodeKGC: Code Language Model for Generative Knowledge Graph Construction
IF 2 | CAS Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2024-02-09 | DOI: 10.1145/3641850
Zhen Bi, Jing Chen, Yinuo Jiang, Feiyu Xiong, Wei Guo, Huajun Chen, Ningyu Zhang

Current generative knowledge graph construction approaches usually fail to capture structural knowledge because they simply flatten natural language into serialized texts or a specification language. However, large generative language models trained on structured data such as code have demonstrated impressive capability in understanding natural language for structural prediction and reasoning tasks. Intuitively, we address the task of generative knowledge graph construction with a code language model: given a code-format natural language input, the target is to generate triples, which can be represented as code completion tasks. Specifically, we develop schema-aware prompts that effectively utilize the semantic structure within the knowledge graph. As code inherently possesses structure, such as class and function definitions, it serves as a useful model for prior semantic structural knowledge. Furthermore, we employ a rationale-enhanced generation method to boost performance: rationales provide intermediate steps, thereby improving knowledge extraction abilities. Experimental results indicate that the proposed approach obtains better performance on benchmark datasets than the baselines.
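The abstract only outlines the idea, so the snippet below is an illustrative guess at what a schema-aware, code-format prompt might look like: entity types become classes, relations become functions, and extraction is posed as completing a Python literal. The schema and relation names are invented for illustration, not taken from the paper.

```python
def build_codekgc_style_prompt(text: str) -> str:
    """Pose triple extraction as code completion with a class/function schema."""
    schema = (
        "class Entity: ...\n"
        "class Person(Entity): ...\n"
        "class Organization(Entity): ...\n"
        "def works_for(head: Person, tail: Organization): ...\n"
    )
    task = f'# Extract relation triples from the sentence below.\n# "{text}"\ntriples = [\n'
    return schema + "\n" + task

prompt = build_codekgc_style_prompt("Alan Turing worked for the University of Manchester.")
# A code LM would be expected to continue with something like:
#   works_for(Person("Alan Turing"), Organization("University of Manchester")),
# ]
```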

Citations: 0
Contrastive Language-Knowledge Graph Pre-training
IF 2 | CAS Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2024-02-09 | DOI: 10.1145/3644820
Xiaowei Yuan, Kang Liu, Yequan Wang

Recent years have witnessed a surge of academic interest in knowledge-enhanced pre-trained language models (PLMs) that incorporate factual knowledge to enhance knowledge-driven applications. Nevertheless, existing studies primarily focus on shallow, static, and separately pre-trained entity embeddings, and few delve into the potential of deep contextualized knowledge representation for knowledge incorporation. Consequently, the performance gains of such models remain limited. In this paper, we introduce a simple yet effective knowledge-enhanced model, College (Contrastive Language-Knowledge Graph Pre-training), which leverages contrastive learning to incorporate factual knowledge into PLMs. This approach keeps the knowledge in its original graph structure to provide the most available information and circumvents the issue of heterogeneous embedding fusion. Experimental results demonstrate that our approach achieves more effective results on several knowledge-intensive tasks compared with previous state-of-the-art methods. Our code and trained models are available at https://github.com/Stacy027/COLLEGE.
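The abstract does not spell out the training objective; a standard choice for this kind of text-graph alignment is a symmetric InfoNCE loss over paired text and entity-node embeddings, sketched below under that assumption (the temperature and in-batch pairing scheme are placeholders).

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(text_emb, node_emb, temperature=0.05):
    """Symmetric InfoNCE: the i-th text matches the i-th graph node in the batch."""
    text_emb = F.normalize(text_emb, dim=-1)
    node_emb = F.normalize(node_emb, dim=-1)
    logits = text_emb @ node_emb.t() / temperature               # (batch, batch) similarities
    labels = torch.arange(logits.size(0), device=logits.device)  # diagonal = positive pairs
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

loss = contrastive_alignment_loss(torch.randn(8, 256), torch.randn(8, 256))
```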

Citations: 0
Improved Regression Analysis with Ensemble Pipeline Approach for Applications Across Multiple Domains
IF 2 | CAS Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2024-02-08 | DOI: 10.1145/3645110
Debajyoty Banik, Rahul Paul, Rajkumar Singh Rathore, Rutvij H. Jhaveri

In this research, we introduce two new machine learning regression methods: the Ensemble Average and the Pipelined Model. These methods aim to enhance traditional regression analysis for predictive tasks and have undergone thorough evaluation on three datasets: Kaggle House Price, Boston House Price, and California Housing, using various performance metrics. The results consistently show that our models outperform existing methods in accuracy and reliability across all three datasets. The Pipelined Model, in particular, is notable for its ability to combine predictions from multiple models, leading to higher accuracy and impressive scalability. This scalability allows application in diverse fields such as technology, finance, and healthcare. Furthermore, these models can be adapted for real-time and streaming data analysis, making them valuable for applications such as fraud detection, stock market prediction, and IoT sensor data analysis. Enhancements to the models also make them suitable for big data applications, ensuring their relevance for large datasets and distributed computing environments. It is important to acknowledge some limitations of our models, including potential data biases, specific assumptions, increased complexity, and challenges related to interpretability in practical scenarios. Nevertheless, these innovations advance predictive modeling, and our comprehensive evaluation underscores their potential to provide increased accuracy and reliability across a wide range of applications. The source code can be found at https://huggingface.co/DebajyotyBanik/Ensemble-Pipelined-Regression/tree/main.
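The abstract does not specify the base learners, so the sketch below merely illustrates the two ideas with scikit-learn on the California Housing data: prediction averaging via VotingRegressor and a pipelined/stacked combination via StackingRegressor. The chosen estimators are assumptions, not the paper's configuration.

```python
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor, VotingRegressor)
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, y = fetch_california_housing(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [("rf", RandomForestRegressor(random_state=0)),
        ("gb", GradientBoostingRegressor(random_state=0))]

avg = VotingRegressor(base).fit(X_tr, y_tr)                               # ensemble average
stack = StackingRegressor(base, final_estimator=Ridge()).fit(X_tr, y_tr)  # pipelined combination

print("average R2:", r2_score(y_te, avg.predict(X_te)))
print("stacked R2:", r2_score(y_te, stack.predict(X_te)))
```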

Citations: 0
Fast Recurrent Neural Network with Bi-LSTM for Handwritten Tamil text segmentation in NLP
IF 2 | CAS Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2024-02-07 | DOI: 10.1145/3643808
C. Vinotheni, Lakshmana Pandian S.

Tamil text segmentation is a long-standing task in language comprehension that entails separating a document into adjacent segments based on its semantic structure. Each segment is important in its own way, and segments are organised according to the purpose of the content analysis as text groups, sentences, phrases, words, characters, or any other data unit. In this research, that segmentation is performed with a fast recurrent neural network, presenting content segmentation methods based on deep learning in natural language processing (NLP). The study proposes a bidirectional long short-term memory (Bi-LSTM) neural network model in which a fast recurrent neural network (FRNN) is used to learn Tamil text-group embeddings and phrases are segmented using text-oriented data. As a result, the model can handle variable-sized context data and yields a large new dataset for naturally segmenting Tamil text. In addition, we develop a segmentation model and show, using this dataset as a base, how well it generalizes to unseen regular content. With Bi-LSTM, the segmentation precision of the FRNN is superior to that of other segmentation approaches, although it still trails certain other techniques. In the proposed framework, every text is scaled to the required size and is immediately available for training; that is, each word in a scaled Tamil text is used to train the neural network on fragmented content. The results reveal that the proposed framework produces high segmentation rates for manually authored material, nearly equivalent to segmentation-based plans.
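As a rough sketch of the kind of model described, below is a minimal PyTorch Bi-LSTM tagger that labels each character as segment boundary or not. The layer sizes and the two-tag scheme are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class BiLSTMSegmenter(nn.Module):
    """Tag each input character as segment-boundary (1) or not (0)."""
    def __init__(self, vocab_size, embed_dim=128, hidden=256, n_tags=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, char_ids):                  # (batch, seq)
        h, _ = self.lstm(self.embed(char_ids))    # (batch, seq, 2*hidden)
        return self.out(h)                        # per-character tag logits

model = BiLSTMSegmenter(vocab_size=500)
logits = model(torch.randint(0, 500, (4, 60)))    # 4 sequences of 60 characters
```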

Citations: 0
Seq2Set2Seq: A Two-stage Disentangled Method for Reply Keyword Generation in Social Media via Multi-label Prediction and Determinantal Point Processes
IF 2 | CAS Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2024-02-05 | DOI: 10.1145/3644074
Jie Liu, Yaguang Li, Shizhu He, Shun Wu, Kang Liu, Shenping Liu, Jiong Wang, Qing Zhang

Social media produces large amounts of content every day. How to predict the potential influence of this content from a social reply-feedback perspective is a key issue that has not been explored. Thus, we propose a novel task, reply keyword prediction in social media, which aims to predict the keywords of potential replies covering as many aspects as possible. A prerequisite challenge is that accessible social media datasets labeling such keywords remain absent. To solve this issue, we propose a new dataset for studying reply keyword prediction in social media. This task can be seen as single-turn dialogue keyword prediction for an open-domain dialogue system. However, existing methods for dialogue keyword prediction cannot be adopted directly, as they have two main drawbacks. First, they provide no explicit mechanism to model topic complementarity between keywords, which is crucial in social media for controllably modeling all aspects of replies. Second, keyword collocations are not explicitly modeled, which also makes fine-grained prediction less controllable, since the context information is much sparser than in dialogue. To address these issues, we propose a two-stage disentangled framework that optimizes complementarity and collocation explicitly and separately. In the first stage, we use a sequence-to-set paradigm, via multi-label prediction and determinantal point processes, to generate a set of keyword seeds satisfying complementarity. In the second stage, we adopt a set-to-sequence paradigm, via a seq2seq model guided by the keyword seeds from the set, to generate finer-grained keywords with collocation. Experiments show that this method generates not only a more diverse set of keywords but also more relevant and consistent ones. Furthermore, keywords obtained with this method achieve better reply-generation results in a retrieval-based system than alternatives.
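Determinantal point processes favor diverse subsets: a subset's probability is proportional to the determinant of its kernel submatrix. The abstract does not detail the inference procedure, so the sketch below shows only a generic greedy MAP-style selection over a similarity kernel; the kernel construction and k are placeholders.

```python
import numpy as np

def greedy_dpp_select(kernel, k):
    """Greedily grow a subset that maximizes log det of the kernel submatrix.

    kernel: (n, n) positive semi-definite similarity matrix; returns k indices.
    """
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(kernel.shape[0]):
            if i in selected:
                continue
            idx = selected + [i]
            gain = np.linalg.slogdet(kernel[np.ix_(idx, idx)])[1]
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return selected

emb = np.random.randn(20, 64)                 # candidate keyword embeddings (placeholder)
K = emb @ emb.T + 1e-3 * np.eye(20)           # PSD kernel with a small jitter term
print(greedy_dpp_select(K, k=5))              # indices of 5 mutually diverse keywords
```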

Citations: 0
Improved BIO-based Chinese Automatic Abstract-generation Model
IF 2 | CAS Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2024-02-05 | DOI: 10.1145/3643695
Qing Li, Weibin Wan, Yuming Zhao, Xiaoyan Jiang

With its unique information-filtering function, text summarization technology has become a significant component of search engines and question-answering systems. However, existing models that include the copy mechanism often lack the ability to extract important fragments, so the generated content suffers from thematic deviation and insufficient generalization. In particular, Chinese automatic summarization with traditional generation methods often loses semantics because of its reliance on word lists. To address these issues, we propose the novel BioCopy mechanism for the summarization task. By training on the tags of predicted words and narrowing the probability distribution over the glossary, we enhance the ability to generate continuous segments, which effectively solves the above problems. Additionally, we apply reinforced canonicality to the inputs to obtain better model results, making the model share sub-network weight parameters and sparsifying the model output to reduce the search space for model prediction. To further improve performance, we calculate the bilingual evaluation understudy (BLEU) score on the English CNN/DailyMail dataset to filter thresholds, reducing the difficulty of word separation and the output's dependence on the word list. We fully fine-tuned the model on the LCSTS dataset for the Chinese summarization task, conducted small-sample experiments on the CSL dataset, and ran ablation experiments on the Chinese dataset. The experimental results demonstrate that the optimized model learns the semantic representation of the original text better than other models and performs well with small sample sizes.
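BioCopy-style decoding tags each generated token with B/I/O and, when the tag is I, restricts the next-token distribution to tokens that can continue a span copied from the source. The sketch below illustrates only that masking step; the tagging model itself and the tensor layout are assumptions, not the paper's implementation.

```python
import torch

def constrain_next_token_logits(logits, prev_token_id, source_ids, tag):
    """If the BIO tag is 'I', mask logits to tokens that follow prev_token_id in the source.

    logits: (vocab,) next-token scores; source_ids: list of source token ids.
    """
    if tag != "I":
        return logits                      # B/O tags leave the distribution unconstrained
    allowed = {source_ids[j + 1]
               for j in range(len(source_ids) - 1)
               if source_ids[j] == prev_token_id}
    if not allowed:
        return logits                      # no continuation found in source; fall back
    mask = torch.full_like(logits, float("-inf"))
    mask[list(allowed)] = 0.0
    return logits + mask

masked = constrain_next_token_logits(torch.randn(100), 7, [3, 7, 42, 7, 8], "I")
# only token ids 42 and 8 keep finite scores
```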

Citations: 0
An Expert System for Indian Sign Language Recognition using Spatial Attention based Feature and Temporal Feature
IF 2 | CAS Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2024-02-03 | DOI: 10.1145/3643824
Soumen Das, Saroj Kr. Biswas, Biswajit Purkayastha

Sign Language (SL) is the only means of communication for hearing-impaired people. Hearing people often have difficulty understanding SL, resulting in a communication barrier between the hearing-impaired and the hearing community. Sign Language Recognition Systems (SLRSs) have helped to bridge this gap. Many SLRSs have been proposed for recognizing SL; however, only a limited number of works target Indian Sign Language (ISL). Most existing SLRSs focus on global features rather than the Region of Interest (ROI); focusing more on the hand region and extracting local features from the ROI improves system accuracy. The attention mechanism is a widely used technique for emphasizing the ROI, yet only a few SLRSs have used it: they employed the Convolution Block Attention Module (CBAM) and temporal attention, but Spatial Attention (SA) has not been utilized in previous SLRSs. Therefore, a novel SA-based SLRS, the Spatial Attention-based Sign Language Recognition Module (SASLRM), is proposed to recognize ISL words for emergency situations. SASLRM recognizes ISL words by combining convolution features from a pretrained VGG-19 model with attention features from an SA module. The proposed model achieved an average accuracy of 95.627% on the ISL dataset. It is further validated on the LSA64, WLASL, and Cambridge Hand Gesture Recognition (HGR) datasets, where it reached accuracies of 97.84%, 98.86%, and 98.22%, respectively. The results indicate the effectiveness of the proposed SLRS in comparison with existing SLRSs.
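Spatial attention of the kind used in CBAM pools the feature map across channels and then learns a per-location gate. The sketch below combines such a module with a VGG-19 backbone in the spirit described; the pooling scheme, kernel size, and classifier head are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

class SpatialAttention(nn.Module):
    """Per-location gate computed from channel-wise average- and max-pooled maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                          # x: (batch, C, H, W)
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        gate = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * gate                            # emphasize hand-region locations

class SASLRMSketch(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.backbone = vgg19(weights=None).features   # the paper uses pretrained weights
        self.sa = SpatialAttention()
        self.head = nn.Linear(512, n_classes)

    def forward(self, frames):                     # (batch, 3, 224, 224)
        feats = self.sa(self.backbone(frames))
        return self.head(feats.mean(dim=(2, 3)))   # global average pool, then classify

logits = SASLRMSketch()(torch.randn(2, 3, 224, 224))
```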

Citations: 0