Latest articles in ACM Transactions on Asian and Low-Resource Language Information Processing

Artificial Intelligence inspired method for cross-lingual cyberhate detection from low resource languages
IF 1.8 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-11 | DOI: 10.1145/3677176
Manpreet Kaur, Munish Saini
The appearance of inflammatory language on social media by college or university students is quite prevalent, prompting platforms to invest in community safety mechanisms. Curbing escalating hate speech entails creating sophisticated artificial intelligence, machine learning, and deep learning algorithms to detect offensive internet content. With a few noteworthy exceptions, the majority of studies on automatic hate speech recognition have emphasized high-resource languages, mainly English. We bridge this gap by addressing hate speech detection in Punjabi (Gurmukhi), a low-resource Indo-Aryan language spoken in Indian educational institutions. This research identifies cross-lingual hate speech in the code-switched English-Punjabi language used on social media. It proposes an approach combining the best hate speech detection techniques to cover the gaps and limitations of existing state-of-the-art systems. In this method, Roman Punjabi is first transliterated, and then Bidirectional Encoder Representations from Transformers (BERT)-based models are employed for hate detection. The proposed model has achieved 0.86 precision and 0.83 recall, and various higher educational institutions could employ it to discover the issues/domains where hate prevails the most.
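The two-stage pipeline described above (transliterate Roman Punjabi to Gurmukhi, then classify with a fine-tuned BERT model) can be sketched end to end. Everything below is illustrative: the tiny word-level transliteration table and the lexicon stand-in for the BERT classifier are hypothetical placeholders, not the authors' artifacts.

```python
# Illustrative sketch of the two-stage pipeline: Roman Punjabi tokens are
# transliterated to Gurmukhi, then a classifier flags hateful content.
# The word table and lexicon "classifier" are hypothetical stand-ins for
# the paper's transliterator and fine-tuned BERT model.

ROMAN_TO_GURMUKHI = {      # toy word-level transliteration table
    "tusi": "ਤੁਸੀਂ",        # "you"
    "changa": "ਚੰਗਾ",       # "good"
}

HATE_LEXICON = {"hate"}    # placeholder for the trained model's knowledge

def transliterate(tokens):
    """Map Roman Punjabi words to Gurmukhi where the table knows them."""
    return [ROMAN_TO_GURMUKHI.get(t.lower(), t) for t in tokens]

def classify(tokens):
    """Stand-in scorer: 1 = hateful, 0 = benign."""
    return int(any(t.lower() in HATE_LEXICON for t in tokens))

def detect(sentence):
    """Transliterate first, then classify, as in the proposed method."""
    return classify(transliterate(sentence.split()))
```

In the actual system the `classify` step would be a BERT forward pass, and precision/recall (the reported 0.86/0.83) would be computed over held-out labeled posts.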
Citations: 0
Leveraging Hybrid Adaptive Sine Cosine Algorithm with Deep Learning for Arabic Poem Meter Detection
IF 1.8 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-10 | DOI: 10.1145/3676963
Najla Al-shathry, Badria Al-onazi, Abdulkhaleq Q. A. Hassan, S. Alotaibi, S. Alotaibi, F. Alotaibi, M. Elbes, Mrim M. Alnfiai
Poetry is a significant aspect of any language. Many cultures and the histories of nations are recorded in poems. Compared to prose, each poem has a quite distinct rhythmic structure. Arabic has its own set of lyrical structures for poems, known as meters. Detecting the meters of Arabic poems is a complicated and lengthy procedure. The text must be encoded using the Arudi method to classify the poem's meter, which requires complex rule-based transformation before another set of rules classifies the meters. Applying deep learning (DL) to meter classification in Arabic poems involves constructing a neural network to discern the rhythmic patterns inherent in various meters. The model can extract essential features, like word lengths or syllable patterns, by tokenizing and preprocessing text datasets. Architectures such as Long Short-Term Memory networks (LSTMs) or Recurrent Neural Networks (RNNs) are fitting solutions for capturing temporal relations in poetic verses. This research introduces a Hybrid Meta-heuristics with Deep Learning for Arabic Poem Meter Detection and Classification (HMDL-APMDC) model. The main intention of the HMDL-APMDC system is to recognize various kinds of meters in Arabic poems. The HMDL-APMDC technique primarily preprocesses the input dataset to make it compatible with the classification process. Besides, the HMDL-APMDC technique applies Convolution and Attention with a Bi-directional Gated Recurrent Unit (CAT-BiGRU) for the automated recognition of meter classes. Furthermore, the adaptive sine cosine algorithm with particle swarm optimization (ASCA-PSO) is applied to optimize the hyperparameter tuning of the CAT-BiGRU model, enhancing the meter detection results. A detailed simulation analysis is made to highlight the improved performance of the HMDL-APMDC technique. The empirical outcomes show that the HMDL-APMDC technique achieved a superior result of 98.53%, outperforming recent models on the MetRec dataset.
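The ASCA-PSO step is a population-based search over hyperparameters. A minimal sine-cosine-style optimizer on a stand-in objective gives the flavor; the paper's actual objective is the CAT-BiGRU validation score, and all constants and the 1-D setup here are assumptions, not the authors' exact update rule.

```python
import math
import random

def sca_optimize(objective, lo, hi, pop=20, iters=200, a=2.0, seed=0):
    """Minimal 1-D sine cosine algorithm (SCA)-style minimizer.

    Tracks the best position found so far (the "destination") and moves
    each candidate toward it with a shrinking sine/cosine step, as in the
    SCA family that ASCA-PSO builds on. Illustrative sketch only.
    """
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(pop)]
    best = min(xs, key=objective)
    history = [objective(best)]               # best score per iteration
    for t in range(iters):
        r1 = a * (1 - t / iters)              # step size decays over time
        for i, x in enumerate(xs):
            r2 = rng.uniform(0, 2 * math.pi)
            r3 = rng.uniform(0, 2)
            step = r1 * (math.sin(r2) if rng.random() < 0.5
                         else math.cos(r2)) * abs(r3 * best - x)
            xs[i] = min(hi, max(lo, x + step))
            if objective(xs[i]) < objective(best):
                best = xs[i]
        history.append(objective(best))
    return best, history

# Stand-in objective: pretend validation loss is minimized at parameter 3.
best, history = sca_optimize(lambda x: (x - 3.0) ** 2, lo=-10, hi=10)
```

In the full method each position would encode a CAT-BiGRU hyperparameter vector (learning rate, hidden size, etc.) and `objective` would be a validation run.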
Citations: 0
Predictive Modeling for Arabic Fake News Detection: Leveraging Language Model Embeddings and Stacked Ensemble
IF 1.8 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-08 | DOI: 10.1145/3677016
Muhammad Umer, Arwa A. Jamjoom, Shtwai Alsubai, Aisha Ahmed AlArfaj, E. Alabdulqader, I. Ashraf
The proliferation of fake news poses a substantial threat to information integrity, prompting the need for robust detection mechanisms. This study advances research on Arabic fake news detection and overcomes the limitation of lower accuracy in existing detectors. It addresses Arabic fake news detection using word embeddings and a powerful stacking classifier. The proposed model combines bagging, boosting, and baseline classifiers, harnessing the strengths of each to create a robust ensemble. Extensive experiments carried out to evaluate the proposed approach indicate remarkable results, with recall, F1 score, accuracy, and precision reaching 99%. The utilization of advanced stacking techniques, coupled with appropriate textual feature extraction, empowers the model to effectively detect Arabic fake news. The study results make a valuable contribution to fake news detection, particularly in the Arabic context, providing a valuable tool for enhancing information veracity and fostering a more informed public discourse. Furthermore, the proposed model's accuracy is compared with other cutting-edge models from the existing literature to showcase its superior performance.
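Stacking, as used here, trains base classifiers and then fits a meta-learner on their outputs. A minimal sketch with two decision stumps as base learners and a perceptron as the meta-model illustrates the mechanism; the toy data and learners are stand-ins, not the paper's bagging/boosting ensemble, and a real setup would fit the meta-model on out-of-fold base predictions to avoid leakage.

```python
# Toy stacking: two decision stumps feed a perceptron meta-learner.
# Data and learners are illustrative stand-ins for the paper's ensemble.
DATA = [((0.0, 0.0), 0), ((0.2, 0.9), 1), ((0.9, 0.1), 1), ((0.8, 0.7), 1),
        ((0.3, 0.2), 0), ((0.1, 0.8), 1), ((0.7, 0.3), 1), ((0.4, 0.4), 0),
        ((0.25, 0.05), 0), ((0.05, 0.3), 0)]

def fit_stump(data, feat):
    """Pick the threshold on one feature that minimizes training errors."""
    candidates = sorted({x[feat] for x, _ in data})
    best_t = min(candidates,
                 key=lambda t: sum(int(x[feat] > t) != y for x, y in data))
    return lambda x: int(x[feat] > best_t)

def fit_meta(meta_x, ys, epochs=25):
    """Perceptron over base-learner outputs (the stacking meta-model)."""
    w = [0.0] * len(meta_x[0])
    b = 0.0
    for _ in range(epochs):
        for s, y in zip(meta_x, ys):
            pred = int(sum(wi * si for wi, si in zip(w, s)) + b > 0)
            for j in range(len(w)):
                w[j] += (y - pred) * s[j]
            b += (y - pred)
    return lambda s: int(sum(wi * si for wi, si in zip(w, s)) + b > 0)

stumps = [fit_stump(DATA, 0), fit_stump(DATA, 1)]
meta_features = [[st(x) for st in stumps] for x, _ in DATA]
labels = [y for _, y in DATA]
meta = fit_meta(meta_features, labels)
accuracy = sum(meta([st(x) for st in stumps]) == y
               for x, y in DATA) / len(DATA)
```

On this toy set each stump alone reaches 0.8 accuracy, while the stacked meta-model corrects both, showing why stacking can beat its base learners.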
Citations: 0
BERT Inspired Progressive Stacking to Enhance Spelling Correction in Bengali Text
IF 1.8 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-05 | DOI: 10.1145/3669941
Debajyoty Banik, Saneyika Das, Sheshikala Martha, Achyut Shankar
Common spell checkers in the current digital era have trouble reading languages like Bengali, which employ English letters differently. In response, we have created an improved BERT-based spell checker that makes use of a CNN sub-model (Semantic Network). Our novelty, which we term progressive stacking, concentrates on improving BERT model training while expediting the correction process. We discovered that, when comparing shallow and deep versions, deeper models could require less training time. There is potential for improving spelling correction with this technique. We categorized and utilized as a test set a 6300-word dataset that Nayadiganta Mohiuddin supplied, some of which had spelling errors. The most popular terms were the same as those found in the Prothom-Alo artificial error dataset.
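Progressive stacking, in the growth sense commonly associated with BERT training, trains a shallow encoder and then duplicates its trained layers to warm-start a deeper one. A schematic of the depth-doubling step, with placeholder dicts standing in for transformer block weights (not real BERT parameters):

```python
import copy

def stack(layers):
    """Double encoder depth by copying the trained layers on top, so the
    deeper model starts from the shallow model's weights rather than from
    scratch (the core move of progressive stacking)."""
    return layers + copy.deepcopy(layers)

# Placeholder "layers": dicts standing in for transformer block weights.
shallow = [{"block": i, "w": 0.1 * i} for i in range(3)]   # 3-layer model
deeper = stack(stack(shallow))                              # 3 -> 6 -> 12
```

Each doubling is followed by further training in the full recipe, which is where the reported training-time savings for deeper models would come from.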
Citations: 0
PsyChatbot: A Psychological Counseling Agent Towards Depressed Chinese Population Based on Cognitive Behavioural Therapy
IF 1.8 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-05 | DOI: 10.1145/3676962
Tiantian Chen, Ying Shen, Xuri Chen, Lin Zhang
Nowadays, depression has drawn wide concern due to the growing depressed population. Depression is a global mental health problem, the worst cases of which can lead to suicide. However, factors such as high treatment costs and social stigma prevent people from obtaining effective treatment. Chatbot technology is one of the main attempts to solve this problem. But as far as we know, existing chatbot systems designed for depressed people are still sporadic, and most of them have non-negligible limitations. Specifically, existing systems simply guide users to release their negative emotions or provide some general advice. They cannot offer personalized advice for users' specific problems. In addition, most of them only support English speakers, despite the fact that depressed Chinese constitute a large population. Psychological counseling systems with improved responsiveness for the depressed Chinese population are lacking. As an attempt to fill this research gap to some extent, we design a novel Chinese psychological chatbot system, namely PsyChatbot. First, we establish a counseling dialogue framework based on Cognitive Behavioral Therapy (CBT), which guides users to reflect on themselves and helps them discover their negative perceptions. Then, we propose a retrieval-based Q&A algorithm to provide suitable suggestions for users' specific problems. Last but not least, we construct a large-scale Chinese counseling Q&A corpus, which contains nearly 89,000 psychological Q&A triples. Experimental results have demonstrated the effectiveness of PsyChatbot. The source code and data have been released at https://github.com/slptongji/PsyChatbot.
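A retrieval-based Q&A step of the kind described scores the user's query against stored counseling questions and returns the best match's answer. A bag-of-words cosine sketch makes this concrete; the tiny English corpus here is an invented placeholder for the ~89,000-triple Chinese corpus, and the real system's scorer is not specified here.

```python
import math
from collections import Counter

CORPUS = [  # placeholder (question, answer) pairs, not the released corpus
    ("i cannot sleep at night", "Try keeping a fixed sleep schedule."),
    ("i feel worthless at work", "Those automatic thoughts are worth examining."),
    ("i argue with my parents a lot", "Conflict at home is stressful; consider calm check-ins."),
]

def bow_cosine(a, b):
    """Cosine similarity between bag-of-words vectors of two strings."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer(query):
    """Return the stored answer whose question best matches the query."""
    _, best_a = max(CORPUS, key=lambda qa: bow_cosine(query, qa[0]))
    return best_a
```

In PsyChatbot this retrieval sits inside the CBT dialogue framework, so the query would already be shaped by the preceding reflective turns.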
Citations: 0
Vi-AbSQA: Multi-task Prompt Instruction Tuning Model for Vietnamese Aspect-based Sentiment Quadruple Analysis
IF 1.8 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-04 | DOI: 10.1145/3676886
T. Dang, D. Hao, Ngan Nguyen
Aspect-based sentiment analysis (ABSA) has recently received considerable attention within the Natural Language Processing (NLP) community, especially for complex tasks like triplet extraction or quadruplet prediction. However, most existing studies focus on high-resource languages. In this paper, we construct a challenging benchmark dataset for Vietnamese Aspect-based Sentiment Quadruple Analysis (AbSQA), where each sentence can contain explicit and implicit aspects and opinion terms. Moreover, each sample includes at least two aspect categories with different sentiments. We release this dataset for free research purposes, believing it will push forward research in this field. In addition, we present a generative-based approach to address the AbSQA task using a multitask instruction prompt tuning framework. Specifically, we design an effective generation paradigm that leverages instruction prompts to provide more information about the task. Besides, our model leverages relational information by designing separate sub-tasks based on the quadruplet elements and fine-tunes the transformer-based pretrained generative models in a multi-task manner. The experimental results demonstrate that our approach outperforms previously established extraction-based and generative-based methods, as well as the baseline variants.
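Instruction-prompt tuning of a generative model reduces each sub-task to text-to-text. A sketch of prompt construction and output parsing for the quadruple task follows; the template wording, the output delimiter format, and the sample sentence are invented for illustration, not the paper's actual templates.

```python
def build_prompt(sentence, subtask):
    """Wrap the input with an instruction describing one sub-task, as in
    multi-task instruction tuning (template wording is illustrative)."""
    instructions = {
        "quad": ("Extract all (aspect category, aspect term, opinion term, "
                 "sentiment) quadruples from the sentence."),
        "category": "List the aspect categories mentioned in the sentence.",
        "sentiment": "State the sentiment toward each aspect category.",
    }
    return f"{instructions[subtask]}\nSentence: {sentence}\nOutput:"

def parse_quads(generated):
    """Parse 'cat | term | opinion | sentiment ; ...' style model output
    into 4-tuples; 'NULL' marks an implicit aspect or opinion term."""
    quads = []
    for chunk in generated.split(";"):
        parts = [p.strip() for p in chunk.split("|")]
        if len(parts) == 4:
            quads.append(tuple(parts))
    return quads

# Hypothetical decoder output for a restaurant-review sentence.
out = "food quality | pho | delicious | positive ; service | NULL | slow | negative"
```

The per-element sub-tasks ("category", "sentiment", etc.) share the same generative backbone, which is what makes the multi-task fine-tuning possible.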
Citations: 0
TinyCheXReport: Compressed deep neural network for Chest X-ray report generation
IF 1.8 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-03 | DOI: 10.1145/3676166
F. Alotaibi, K. Alyoubi, Ajay Mittal, Vishal Gupta, Navdeep Kaur
The increase in chest X-ray (CXR) imaging tests has burdened radiologists, posing significant challenges to writing radiological reports on time. Although several deep learning-based automatic report generation methods have been developed to date, most are over-parameterized. For deployment on edge devices with constrained processing power or limited resources, over-parameterized models are often too large. This paper presents a compressed deep learning-based model that is 30% more space efficient than the non-compressed base model while the two have comparable performance. A model comprising VGG19 and a hierarchical long short-term memory (LSTM) network equipped with a contextual word embedding layer is used as the base model. The redundant weight parameters are removed from the base model using unstructured one-shot pruning. To overcome the resulting performance degradation, the lightweight pruned model is fine-tuned on the publicly available OpenI dataset. The quantitative evaluation metric scores demonstrate that the proposed model surpasses the performance of state-of-the-art models. Additionally, the proposed model, being 30% more space efficient, is easily deployable in resource-limited settings. Thus, this study serves as a baseline for the development of compressed models that generate radiological reports from CXR images.
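Unstructured one-shot pruning removes individual weights by magnitude in a single pass. A minimal sketch at the stated 30% sparsity, with a flat weight list standing in for the VGG19/LSTM tensors:

```python
def one_shot_prune(weights, sparsity=0.3):
    """Zero out the smallest-magnitude fraction of weights in one pass
    (unstructured: individual weights, not whole channels or layers)."""
    k = int(len(weights) * sparsity)          # number of weights to drop
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

# Toy stand-in for one flattened layer of the base model.
w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02, 0.6, -0.3, 0.08, 0.5]
pruned = one_shot_prune(w, sparsity=0.3)
```

As the abstract notes, the pruned network is then fine-tuned to recover the accuracy lost when the zeroed weights are removed.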
Citations: 0
In-memory database load balancing optimization for massive information processing of the Internet of Things
IF 2.0 | CAS Tier 4 (Computer Science) | Q2 Computer Science | Pub Date: 2024-06-10 | DOI: 10.1145/3670996
Huixiang Xu
In order to improve the operation of the in-memory database for massive information processing in the Internet of Things, this paper combines a load balancing signal processing algorithm to carry out load balancing optimization analysis of the in-memory database. Based on the local transformation characteristics of non-stationary multi-component signals, an adaptive FSST algorithm is proposed. Based on the signal separability condition, the local Rayleigh entropy is used to estimate the window function parameters of the adaptive FSST and the adaptive FSST2. In addition, an adaptive window function is adopted to automatically match local changes in the signal, so that the signal has optimal energy aggregation in any part. The results show that, for the same number of concurrent users, the time consumption, throughput, and bandwidth of the proposed method are always higher than those of the method in reference [10]. When the number of concurrent users is 97, the time of the proposed method is 45000 ms versus 40000 ms for the method in reference [10]; the proposed method's highest throughput is 2.30 MB/s and its highest bandwidth is 11.9 MB/s, against a highest throughput of 2.2 MB/s and a highest bandwidth of 11.8 MB/s for the method in reference [10]. The load balancing optimization algorithm of the in-memory database for massive information processing in the Internet of Things achieves good results.
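The abstract details the FSST signal-processing side but gives no pseudo-code for the balancer itself. As context for what is being optimized, here is a generic least-loaded dispatch sketch for spreading IoT requests across in-memory database nodes; the node names and per-request costs are invented, and the paper's balancer additionally adapts using its FSST-based analysis of the load signal.

```python
import heapq

def assign(requests, nodes):
    """Send each request to the currently least-loaded node.

    Generic least-loaded dispatch shown for context only; not the paper's
    FSST-driven algorithm.
    """
    heap = [(0.0, name) for name in nodes]   # (accumulated load, node)
    heapq.heapify(heap)
    placement = {}
    for req, cost in requests:
        load, name = heapq.heappop(heap)     # lightest node right now
        placement[req] = name
        heapq.heappush(heap, (load + cost, name))
    return placement

reqs = [("r1", 5), ("r2", 3), ("r3", 2), ("r4", 4)]
plan = assign(reqs, ["node-a", "node-b"])
```

The heap keeps node selection at O(log n) per request, which matters at the concurrency levels (around 97 users) reported in the experiments.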
Citations: 0
FedREAS: A Robust Efficient Aggregation and Selection Framework for Federated Learning
IF 2 | Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2024-06-04 | DOI: 10.1145/3670689
Shuming Fan, Chenpei Wang, Xinyu Ruan, Hongjian Shi, Ruhui Ma, Haibing Guan

In the field of Natural Language Processing (NLP), Deep Learning (DL) and Neural Network (NN) technologies have been widely applied to machine translation and sentiment analysis, where they have demonstrated outstanding performance. In recent years, NLP applications have also incorporated multimodal data, such as visual and audio inputs, continuously improving language processing performance. At the same time, neural network models keep growing in size, and many cannot be deployed on devices with limited computing resources, so deploying models on cloud platforms has become a trend. However, although cloud deployment overcomes these computational limitations, it introduces new privacy risks for endpoint data. Federated Learning (FL) methods protect local data by keeping it on the client side and sending only local updates to the central server. The FL architecture nevertheless still has problems, such as vulnerability to adversarial attacks and non-IID data distributions. In this work, we propose a Federated Learning aggregation method called FedREAS. In this method, the server trains a global model on a benchmark dataset to obtain benchmark updates. Before aggregating the local updates, the server adjusts them using the benchmark updates; then, based on the similarity between the adjusted local updates and the adjusted benchmark updates, it aggregates the local updates to obtain a more robust update. The method also improves the client selection process.
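The benchmark-guided aggregation step can be sketched roughly as follows. This is a simplified stand-in for FedREAS, not the authors' code: each client update is weighted by its cosine similarity to the server's benchmark update, so dissimilar (possibly adversarial or highly non-IID) updates contribute less. All function and variable names here are hypothetical.

```python
import numpy as np

def similarity_weighted_aggregate(local_updates, benchmark_update):
    """Aggregate client updates, weighting each by its (clipped) cosine
    similarity to a server-side benchmark update."""
    b = benchmark_update / (np.linalg.norm(benchmark_update) + 1e-12)
    sims = np.array([max(0.0, float(u @ b) / (np.linalg.norm(u) + 1e-12))
                     for u in local_updates])
    if sims.sum() == 0.0:                 # no client agrees with the benchmark
        return benchmark_update.copy()
    w = sims / sims.sum()                 # normalize to a convex combination
    return sum(wi * u for wi, u in zip(w, local_updates))

# Three honest clients near the benchmark direction, one flipped (adversarial).
rng = np.random.default_rng(0)
bench = rng.normal(size=100)
honest = [bench + 0.1 * rng.normal(size=100) for _ in range(3)]
adversarial = [-5.0 * bench]
agg = similarity_weighted_aggregate(honest + adversarial, bench)
```

The clipping at zero means an update pointing against the benchmark is excluded entirely rather than merely down-weighted, which is one simple way to blunt sign-flipping attacks.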

Citations: 0
Study on Intelligent Scoring of English Composition Based on Machine Learning from the Perspective of Natural Language Processing
IF 2 | Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2024-06-04 | DOI: 10.1145/3625545
Jing Tang

Knowledge management is crucial to the teaching and learning process in the current era of digitalization. The idea of "learning by working together" is making Natural Language Processing a popular tool for improving the learning process through an intelligent composition-evaluation system. English language learning depends heavily on the compositions students write on various topics, and teachers face great difficulty in evaluating them because writing levels vary from student to student. In this research, Natural Language Processing techniques are used to model students' writing skills, and a Multiprocessor Learning Algorithm combined with a Convolutional Neural Network (MLA-CNN) evaluates each composition and assigns it a score. The model's composition scoring rate is validated using a range of learning-rate settings. Some theoretical notions for smart teaching are proposed, and it is hoped that this automatic composition scoring model will be used to grade student writing in English classes. When applied to the automatic scoring of students' English compositions in schools, the proposed MLA-CNN scoring system performs well and lays the groundwork for educational applications of machine learning within AI. The study results show that the proposed model achieves an accuracy of 98%.
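As a rough illustration of the CNN half of such a scorer (not the paper's MLA-CNN, and with random parameters standing in for trained weights): embed the tokens, run a width-3 one-dimensional convolution, max-pool over time, and squash the result into a 0–100 score. All sizes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
VOCAB, DIM, KERNELS, WIDTH = 50, 8, 4, 3

# Randomly initialized parameters stand in for a trained scorer's weights.
embed = rng.normal(size=(VOCAB, DIM))       # token embedding table
conv_w = rng.normal(size=(KERNELS, WIDTH, DIM))  # width-3 conv filters
out_w = rng.normal(size=KERNELS)            # output projection

def score_essay(token_ids):
    """Embed tokens, apply a width-3 1D convolution, ReLU + max-pool over
    time, and map the pooled features to a score in [0, 100] via a sigmoid."""
    E = embed[token_ids]                                        # (T, DIM)
    n = len(token_ids)
    feats = np.array([[np.tensordot(E[t:t + WIDTH], k)          # dot over both axes
                       for t in range(n - WIDTH + 1)]
                      for k in conv_w])                         # (KERNELS, n-WIDTH+1)
    pooled = np.maximum(feats, 0.0).max(axis=1)                 # ReLU + max-pool
    return 100.0 / (1.0 + np.exp(-out_w @ pooled))

print(round(score_essay(rng.integers(0, VOCAB, size=30)), 1))
```

In a real system the embedding, convolution and output weights would be learned from graded compositions; max-pooling makes the score insensitive to where in the essay a salient n-gram pattern occurs.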

Citations: 0