
Latest publications — ACM Transactions on Asian and Low-Resource Language Information Processing

Handling Imbalance and Limited Data in Thyroid Ultrasound and Diabetic Retinopathy Datasets Using Discrete Levy Flights Grey Wolf Optimizer Based Random Forest for Robust Medical Data Classification
IF 2, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-02-16. DOI: 10.1145/3648363
Shobha Aswal, Neelu Jyothi Ahuja, Ritika Mehra

In the field of disease diagnosis, medical image classification faces inherent challenges arising from data imbalance, image quality variability, annotation variability, and limited data availability and representativeness. These challenges adversely affect an algorithm's ability to classify medical images, leading to biased model outcomes and inaccurate interpretations. In this paper, a novel Discrete Levy Flight Grey Wolf Optimizer (DLFGWO) is combined with the Random Forest (RF) classifier to address these limitations on biomedical datasets and to achieve a better classification rate. DLFGWO-RF resolves image quality variability in ultrasound images and limits classification inaccuracies in RF by handling incomplete and noisy data. A sheer focus on the majority class yields an unequal class distribution and hence data imbalance. DLFGWO balances this distribution by leveraging grey wolves, whose exploration and exploitation capabilities are improved using Discrete Levy Flight (DLF). It further optimizes the classifier's performance to achieve a balanced classification rate. DLFGWO-RF is designed to perform classification even on limited datasets, thereby reducing the need for numerous expert annotations. In diabetic retinopathy grading, DLFGWO-RF reduces disagreements arising from annotation variability and subjective interpretation. However, the diabetic retinopathy dataset fails to capture the full diversity of the population, which limits the generalization ability of the proposed DLFGWO-RF. Fine-tuning RF therefore allows it to adapt robustly to subgroups in the dataset, enhancing its overall performance. Experiments are conducted on two widely used medical image datasets to test the efficacy of the model. The experimental results show that the DLFGWO-RF classifier achieves classification accuracy of 90-95%, outperforming existing techniques on various imbalanced datasets.
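The abstract does not give the Levy-flight update itself; as a rough illustration of how a discrete Levy flight can drive a feature-selection move in a grey-wolf-style optimizer, here is a minimal sketch. All function names and the bit-flip heuristic are our own illustration, not the authors' implementation; the step size is drawn with the standard Mantegna algorithm.

```python
import math
import random

def levy_step(beta: float = 1.5) -> float:
    """Draw one Levy-flight step via the Mantegna algorithm."""
    # sigma_u from the standard Mantegna formula for stability exponent beta
    sigma_u = (
        math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
        / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
    ) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def discrete_levy_move(position, n_features, beta=1.5, flip_scale=1.0):
    """Flip feature-selection bits with a probability driven by a Levy step.

    `position` is a 0/1 list marking selected features; rare large Levy
    steps flip many bits (exploration), small steps flip few (exploitation).
    """
    step = abs(levy_step(beta)) * flip_scale
    p_flip = min(1.0, step / n_features)
    return [bit ^ (1 if random.random() < p_flip else 0) for bit in position]

random.seed(0)
pos = [1, 0, 1, 1, 0, 0, 1, 0]
new_pos = discrete_levy_move(pos, n_features=len(pos))
print(len(new_pos), set(new_pos) <= {0, 1})
```

The heavy-tailed step distribution is what distinguishes Levy flights from plain Gaussian mutation: most moves are local, but occasional long jumps escape local optima.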

Citations: 0
Enriching Urdu NER with BERT Embedding, Data Augmentation, and Hybrid Encoder-CNN Architecture
IF 2, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-02-15. DOI: 10.1145/3648362
Anil Ahmed, Degen Huang, Syed Yasser Arafat, Imran Hameed

Named Entity Recognition (NER) is an indispensable component of Natural Language Processing (NLP) that aims to identify and classify entities within text data. While Deep Learning (DL) models have excelled at NER for well-resourced languages like English, Spanish, and Chinese, they face significant hurdles with low-resource languages like Urdu. These challenges stem from the intricate linguistic characteristics of Urdu, including morphological diversity, a context-dependent lexicon, and the scarcity of training data. This study addresses these issues by focusing on Urdu Named Entity Recognition (U-NER) and introducing three key contributions. First, various pre-trained embedding methods are employed, encompassing Word2vec (W2V), GloVe, FastText, Bidirectional Encoder Representations from Transformers (BERT), and Embeddings from Language Models (ELMo); in particular, BERT-Base and ELMo are fine-tuned on Urdu Wikipedia and news articles. Second, a novel generative Data Augmentation (DA) technique replaces Named Entities (NEs) with mask tokens and employs pre-trained masked language models to predict the masked tokens, effectively expanding the training dataset. Finally, the study introduces a novel hybrid model combining a Transformer Encoder with a Convolutional Neural Network (CNN) to capture the intricate morphology of Urdu. These modules enable the model to handle polysemy, extract short- and long-range dependencies, and enhance learning capacity. Empirical experiments demonstrate that the proposed model, incorporating BERT embeddings and the novel DA approach, attains the highest F1-score of 93.99%, highlighting its efficacy for the U-NER task.
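The masking half of the described DA step can be sketched as follows; the re-prediction of masked slots with a pretrained masked language model is omitted, and the BIO tag scheme, example tokens, and function name are illustrative assumptions rather than the authors' code.

```python
def mask_entities(tokens, tags, mask_token="[MASK]"):
    """Replace each named-entity token (any non-'O' BIO tag) with a mask token.

    Returns the masked sentence plus the indices that a pretrained masked
    language model would then re-predict to generate an augmented sentence.
    """
    masked, slots = [], []
    for i, (tok, tag) in enumerate(zip(tokens, tags)):
        if tag != "O":          # B-*/I-* tags mark entity tokens
            masked.append(mask_token)
            slots.append(i)
        else:
            masked.append(tok)
    return masked, slots

tokens = ["Imran", "visited", "Lahore", "yesterday"]
tags   = ["B-PER", "O", "B-LOC", "O"]
masked, slots = mask_entities(tokens, tags)
print(masked)   # ['[MASK]', 'visited', '[MASK]', 'yesterday']
print(slots)    # [0, 2]
```

Filling each slot with a masked-LM prediction yields a new sentence whose entities differ but whose context (and therefore its BIO tag sequence) is preserved.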

Citations: 0
Sentiment Analysis Method of Epidemic-related Microblog Based on Hesitation Theory
IF 2, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-02-14. DOI: 10.1145/3648360
Yang Yu, Dong Qiu, HuanYu Wan

The COVID-19 pandemic in 2020 brought an unprecedented global crisis. After two years of control efforts, life gradually returned to the pre-pandemic state, but localized outbreaks continued to occur. Towards the end of 2022, COVID-19 resurged in China, once again disrupting people's lives and work. Much of the information on social media reflected people's views of and emotions about the second outbreak, which differed distinctly from the first outbreak in 2020. To explore people's emotional attitudes towards the pandemic at different stages, and the underlying reasons, this study collected microblog data from November 2022 to January 2023 and from January to June 2020, encompassing Chinese reactions to the COVID-19 pandemic. Based on hesitancy and fuzzy intuition theory, we proposed a hypothesis: hesitancy can be integrated into machine learning models to select suitable corpora for training, which not only improves accuracy but also enhances model efficiency. Based on this hypothesis, we designed a hesitancy-integrated model. Experimental results demonstrated the model's positive performance on a self-constructed database. Applying this model to analyze people's attitudes towards the pandemic, we obtained their sentiments in different months. We found that the most negative emotions appeared at the beginning of the pandemic, followed by emotional fluctuations influenced by social events, ultimately showing an overall positive trend. Combining word-cloud techniques with the Latent Dirichlet Allocation (LDA) model effectively helped explore the reasons behind the changes in pandemic attitudes.
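In its simplest reading, hesitancy-based corpus selection could amount to keeping the samples on which a preliminary classifier is least decided; a minimal sketch under that assumption (the band thresholds, names, and toy data are invented for illustration and are not the paper's model):

```python
def select_hesitant(samples, probs, low=0.4, high=0.6):
    """Keep samples whose positive-class probability falls in the
    'hesitation' band [low, high] -- the cases a model is least sure about.

    Training on these boundary cases is the intuition behind integrating
    hesitancy into corpus selection.
    """
    return [s for s, p in zip(samples, probs) if low <= p <= high]

corpus = ["great news", "terrible", "not sure how I feel", "meh day"]
probs  = [0.95, 0.05, 0.52, 0.45]   # hypothetical classifier confidences
print(select_hesitant(corpus, probs))  # ['not sure how I feel', 'meh day']
```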

Citations: 0
MSEConv: A Unified Warping Framework for Video Frame Interpolation
IF 2, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-02-14. DOI: 10.1145/3648364
Xiangling Ding, Pu Huang, Dengyong Zhang, Wei Liang, Feng Li, Gaobo Yang, Xin Liao, Yue Li

Within the context of video frame interpolation, complex motion modeling is the task of capturing, in a video sequence, where the moving objects are located in the interpolated frame and how to maintain the temporal consistency of motion. Existing video frame interpolation methods typically assign either a fixed-size motion kernel or a refined optical flow to model complex motions; however, these suffer from data redundancy and inaccurate motion representation. This paper introduces a unified warping framework, named multi-scale expandable deformable convolution (MSEConv), that simultaneously performs complex motion modeling and frame interpolation. In the proposed framework, a deep fully convolutional neural network with global attention estimates multiple small-scale kernel weights with different expansion degrees and performs adaptive weight allocation for each pixel synthesis. Moreover, most kernel-based interpolation methods can be treated as special cases of the proposed MSEConv; thus, MSEConv can be easily transferred to other kernel-based frame interpolation methods to improve their performance. To further improve robustness to motion occlusions, a mask occlusion operation is introduced. As a consequence, the proposed MSEConv performs on par with or better than state-of-the-art kernel-based frame interpolation works on public datasets. Our source code and visual comparison results are available at https://github.com/Pumpkin123709/MSEConv.
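A bare-bones version of per-pixel adaptive kernel synthesis — the general mechanism kernel-based interpolation builds on, not the MSEConv architecture itself — might look like this. The shapes, the 50/50 frame blend, and all names are illustrative assumptions.

```python
import numpy as np

def synthesize_pixel(patch, weights):
    """Adaptive kernel synthesis for one output pixel: a weighted sum of a
    k x k input neighborhood, with the kernel normalized per pixel."""
    w = weights / weights.sum()
    return float((patch * w).sum())

def interpolate_frame(frame_a, frame_b, weights_a, weights_b, k=3):
    """Blend co-located k x k patches of two frames with per-pixel kernels.

    weights_a / weights_b have shape (H, W, k, k): one kernel per output
    pixel, which is what lets the warp adapt to local motion.
    """
    h, w = frame_a.shape
    pad = k // 2
    fa = np.pad(frame_a, pad, mode="edge")
    fb = np.pad(frame_b, pad, mode="edge")
    out = np.empty_like(frame_a, dtype=float)
    for y in range(h):
        for x in range(w):
            pa = fa[y:y + k, x:x + k]
            pb = fb[y:y + k, x:x + k]
            out[y, x] = 0.5 * synthesize_pixel(pa, weights_a[y, x]) \
                      + 0.5 * synthesize_pixel(pb, weights_b[y, x])
    return out

rng = np.random.default_rng(0)
a = rng.random((4, 4)); b = rng.random((4, 4))
wa = rng.random((4, 4, 3, 3)); wb = rng.random((4, 4, 3, 3))
mid = interpolate_frame(a, b, wa, wb)
print(mid.shape)  # (4, 4)
```

In a real system the per-pixel kernels come from a trained network rather than random numbers; a fixed-size kernel corresponds to sharing one `k` everywhere, which is the special case the abstract mentions.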

Citations: 0
Boundary-Aware Abstractive Summarization with Entity-Augmented Attention for Enhancing Faithfulness
IF 2, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-02-13. DOI: 10.1145/3641278
Jiuyi Li, Junpeng Liu, Jianjun Ma, Wei Yang, Degen Huang

With the successful application of deep learning, document summarization systems can produce more readable results. However, abstractive summarization still suffers from unfaithful outputs and factual errors, especially in named entities. Current approaches tend to employ external knowledge to improve model performance while neglecting the boundary information and semantics of the entities. In this paper, we propose an entity-augmented method (EAM) that encourages the model to make full use of entity boundary information and pay more attention to critical entities. Experimental results on three Chinese and English summarization datasets show that our method outperforms several strong baselines and achieves state-of-the-art performance on the CLTS dataset. Our method also improves the faithfulness of the summary and generalizes well to different pre-trained language models. Moreover, we propose a method to evaluate the integrity of generated entities. In addition, we adapt the data augmentation method of the FactCC model to the grammatical differences between Chinese and English and train a new evaluation model for factual-consistency evaluation in Chinese summarization.
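One simple way to make attention entity-aware is to bias the attention logits toward entity positions before the softmax; the sketch below illustrates that general idea only (the additive boost term, shapes, and names are our assumptions, not the paper's EAM):

```python
import numpy as np

def entity_augmented_attention(scores, entity_mask, boost=2.0):
    """Add a bias to attention logits at entity key positions, then softmax.

    scores:      (query_len, key_len) raw attention logits
    entity_mask: (key_len,) 1.0 where the key token lies inside an entity
    """
    biased = scores + boost * entity_mask[None, :]
    # numerically stable softmax over the key axis
    e = np.exp(biased - biased.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

scores = np.zeros((2, 4))                  # uniform logits for illustration
mask = np.array([0.0, 1.0, 1.0, 0.0])      # tokens 1-2 form an entity span
attn = entity_augmented_attention(scores, mask)
print(attn[0])  # entity positions receive more attention mass
```

With uniform logits, the two entity positions end up with strictly more attention mass than the non-entity ones, which is the qualitative effect the method relies on.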

Citations: 0
A Study for Enhancing Low-resource Thai-Myanmar-English Neural Machine Translation
IF 2, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-02-13. DOI: 10.1145/3645111
Mya Ei San, Sasiporn Usanavasin, Ye Kyaw Thu, Manabu Okumura

Several methodologies have recently been proposed to enhance the performance of low-resource Neural Machine Translation (NMT). However, these techniques have yet to be explored thoroughly for the low-resource Thai and Myanmar languages. Therefore, we first applied augmentation techniques such as SwitchOut and Ciphertext-Based Data Augmentation (CipherDAug) to improve NMT performance in these languages. Second, we enhanced NMT performance by fine-tuning the pre-trained Multilingual Denoising BART model (mBART), where BART denotes Bidirectional and Auto-Regressive Transformer. We implemented three NMT systems, namely Transformer+SwitchOut, Multi-source Transformer+CipherDAug, and fine-tuned mBART, on the bidirectional translation of Thai-English-Myanmar language pairs from the ASEAN-MT corpus. Experimental results showed that Multi-source Transformer+CipherDAug significantly improved BLEU, ChrF, and TER scores over the first baseline, Transformer, and the second baseline, the Edit-Based Transformer (EDITOR). The model achieved notable BLEU scores: 37.9 (English-to-Thai), 42.7 (Thai-to-English), 28.9 (English-to-Myanmar), 31.2 (Myanmar-to-English), 25.3 (Thai-to-Myanmar), and 25.5 (Myanmar-to-Thai). The fine-tuned mBART model also considerably outperformed the two baselines, except on the Myanmar-to-English pair. SwitchOut improved over the second baseline on all pairs and performed similarly to the first baseline in most cases. Lastly, we performed detailed analyses verifying that the CipherDAug and mBART models can facilitate improved low-resource NMT performance in the Thai and Myanmar languages.
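SwitchOut augments a training sentence by replacing a Hamming-distance-controlled number of tokens with random vocabulary tokens; a minimal single-sentence sketch of that idea follows (the temperature, toy Thai-like vocabulary, and sampling details are simplified from the original method):

```python
import math
import random

def switchout(tokens, vocab, tau=1.0):
    """SwitchOut-style augmentation for one sentence.

    Sample how many positions to corrupt from a distribution favoring small
    Hamming distances, p(n) proportional to exp(-n / tau), then replace each
    chosen position with a different random vocabulary token.
    """
    L = len(tokens)
    weights = [math.exp(-n / tau) for n in range(L + 1)]
    n = random.choices(range(L + 1), weights=weights)[0]
    out = list(tokens)
    for i in random.sample(range(L), n):
        out[i] = random.choice([w for w in vocab if w != out[i]])
    return out

random.seed(0)
vocab = ["kin", "ka", "pai", "nam", "dee", "mai"]
print(switchout(["pai", "nam", "dee"], vocab))
```

Raising `tau` makes heavier corruption more likely; the original work applies this independently to both the source and target sides of each pair.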

Citations: 0
Disambiguation of Isolated Manipuri Tonal Contrast Word Pairs using Acoustic Features
IF 2, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-02-12. DOI: 10.1145/3643830
Thiyam Susma Devi, Pradip K. Das

Manipuri is a low-resource Tibeto-Burman tonal language spoken mainly in Manipur, a northeastern state of India. Tone identification is crucial to speech comprehension in tonal languages, where tone defines a word's meaning. Automatic Speech Recognition for such languages can perform better by incorporating tonal information from a powerful tone detection system. While significant research has been conducted on tonal languages like Mandarin, Thai, Cantonese, and Vietnamese, a notable gap exists in exploring Manipuri in this context. To address this gap, this work expands our previously developed handcrafted speech corpus, ManiTo, which comprises isolated Manipuri tonal contrast word pairs for studying the tones of Manipuri. The extension includes contributions from twenty native speakers. Preliminary findings confirmed that Manipuri has two distinct tones, falling and level. The study then conducts a comprehensive acoustic feature analysis: two sets of features, based on pitch contours and on jitter and shimmer measurements, are investigated to distinguish the two tones. Support Vector Machine, Long Short-Term Memory, Random Forest, and k-Nearest Neighbors classifiers are adopted to validate the selected feature sets. The results indicate that the second set of features consistently outperformed the first, demonstrating higher accuracy, particularly with the Random Forest classifier, which provides valuable insights for further advances in speech recognition technology for the low-resource tonal language Manipuri.
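Local jitter and shimmer are standard voice-perturbation measures: the mean absolute cycle-to-cycle change in pitch period (jitter) or peak amplitude (shimmer), normalized by the mean value. A minimal sketch with made-up cycle values (the specific feature definitions used in the paper may differ):

```python
import numpy as np

def jitter_local(periods):
    """Local jitter: mean absolute difference between consecutive pitch
    periods, normalized by the mean period (a dimensionless fraction)."""
    periods = np.asarray(periods, dtype=float)
    return np.abs(np.diff(periods)).mean() / periods.mean()

def shimmer_local(amplitudes):
    """Local shimmer: the same measure applied to cycle peak amplitudes."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return np.abs(np.diff(amplitudes)).mean() / amplitudes.mean()

periods = [7.9, 8.1, 8.0, 8.2, 7.8]      # ms, one value per glottal cycle
amps    = [0.61, 0.59, 0.63, 0.60, 0.62]  # peak amplitude per cycle
print(round(jitter_local(periods), 4), round(shimmer_local(amps), 4))
```

A perfectly periodic signal gives zero for both measures; irregular phonation raises them, which is why they complement raw pitch contours as tone-discriminating features.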

Disambiguation of Isolated Manipuri Tonal Contrast Word Pairs using Acoustic Features
IF 2.0 Computer Science (CAS Zone 4) Pub Date : 2024-02-12 DOI: 10.1145/3643830
Thiyam Susma Devi, Pradip K. Das
Citations: 0
CodeKGC: Code Language Model for Generative Knowledge Graph Construction
IF 2.0 Computer Science (CAS Zone 4) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-02-09 DOI: 10.1145/3641850
Zhen Bi, Jing Chen, Yinuo Jiang, Feiyu Xiong, Wei Guo, Huajun Chen, Ningyu Zhang

Current generative knowledge graph construction approaches usually fail to capture structural knowledge because they simply flatten natural language into serialized texts or a specification language. However, large generative language models trained on structured data such as code have demonstrated impressive capability in understanding natural language for structural prediction and reasoning tasks. Intuitively, we address generative knowledge graph construction with a code language model: given a code-format natural language input, the target is to generate triples, so that extraction can be cast as a code completion task. Specifically, we develop schema-aware prompts that effectively utilize the semantic structure within the knowledge graph. As code inherently possesses structure, such as class and function definitions, it serves as a useful carrier for prior semantic structural knowledge. Furthermore, we employ a rationale-enhanced generation method to boost performance: rationales provide intermediate steps, thereby improving knowledge extraction. Experimental results indicate that the proposed approach achieves better performance on benchmark datasets than the baselines.
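To make the "triples as code completion" idea concrete, here is a minimal sketch of a schema-aware, code-format prompt. The class definitions, helper names and regex below are illustrative assumptions, not the paper's actual prompt format:

```python
import re

# Hypothetical schema-aware prompt: the knowledge-graph schema is
# rendered as class definitions, so a code language model can emit
# triples as constructor calls (i.e., a code completion task).
SCHEMA = '''\
class Entity:
    def __init__(self, name): self.name = name

class Triple:
    def __init__(self, head, relation, tail):
        self.head, self.relation, self.tail = head, relation, tail
'''

def build_prompt(sentence):
    """Assemble a code-format prompt for a given input sentence."""
    return (SCHEMA
            + f'\n# Extract triples from: "{sentence}"\n'
            + "triples = [\n")

def parse_completion(completion):
    """Parse completions of the form
    Triple(Entity("A"), "rel", Entity("B")) back into string tuples."""
    pat = r'Triple\(Entity\("(.+?)"\),\s*"(.+?)",\s*Entity\("(.+?)"\)\)'
    return re.findall(pat, completion)
```

A completion such as `Triple(Entity("Paris"), "capital_of", Entity("France")),` then parses back into the tuple `('Paris', 'capital_of', 'France')`.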

Citations: 0
Contrastive Language-Knowledge Graph Pre-training
IF 2.0 Computer Science (CAS Zone 4) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-02-09 DOI: 10.1145/3644820
Xiaowei Yuan, Kang Liu, Yequan Wang

Recent years have witnessed a surge of academic interest in knowledge-enhanced pre-trained language models (PLMs) that incorporate factual knowledge to enhance knowledge-driven applications. Nevertheless, existing studies primarily focus on shallow, static, and separately pre-trained entity embeddings, with few delving into the potential of deep contextualized knowledge representation for knowledge incorporation. Consequently, the performance gains of such models remain limited. In this paper, we introduce a simple yet effective knowledge-enhanced model, College (Contrastive Language-Knowledge Graph Pre-training), which leverages contrastive learning to incorporate factual knowledge into PLMs. This approach maintains the knowledge in its original graph structure to provide the most available information and circumvents the issue of heterogeneous embedding fusion. Experimental results demonstrate that our approach achieves more effective results on several knowledge-intensive tasks compared to previous state-of-the-art methods. Our code and trained models are available at https://github.com/Stacy027/COLLEGE.
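College's precise training objective is defined in the paper; as a generic illustration of contrastive alignment between text and graph embeddings, an InfoNCE-style loss with in-batch negatives can be sketched as follows (a NumPy toy, not the released implementation):

```python
import numpy as np

def info_nce(text_emb, graph_emb, temperature=0.1):
    """Generic InfoNCE contrastive loss: the i-th text embedding should
    be most similar to the i-th graph embedding; the other items in the
    batch act as negatives."""
    # L2-normalise both sides so the dot product is cosine similarity.
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    g = graph_emb / np.linalg.norm(graph_emb, axis=1, keepdims=True)
    logits = t @ g.T / temperature               # (batch, batch)
    # Cross-entropy with the diagonal as the target class.
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Aligned pairs sit on the diagonal of the similarity matrix, so the loss is near zero for a perfectly aligned batch and grows when text and entity embeddings are mismatched.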

Citations: 0
Improved Regression Analysis with Ensemble Pipeline Approach for Applications Across Multiple Domains
IF 2.0 Computer Science (CAS Zone 4) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-02-08 DOI: 10.1145/3645110
Debajyoty Banik, Rahul Paul, Rajkumar Singh Rathore, Rutvij H. Jhaveri

In this research, we introduce two new machine learning regression methods: the Ensemble Average and the Pipelined Model. These methods aim to enhance traditional regression analysis for predictive tasks and have undergone thorough evaluation on three datasets (Kaggle House Price, Boston House Price, and California Housing) using various performance metrics. The results consistently show that our models outperform existing methods in accuracy and reliability across all three datasets. The Pipelined Model, in particular, is notable for its ability to combine predictions from multiple models, leading to higher accuracy and impressive scalability. This scalability allows application in diverse fields such as technology, finance, and healthcare. Furthermore, these models can be adapted for real-time and streaming data analysis, making them valuable for applications such as fraud detection, stock market prediction, and IoT sensor data analysis. Enhancements to the models also make them suitable for big data applications, ensuring their relevance for large datasets and distributed computing environments. It is important to acknowledge some limitations of our models, including potential data biases, specific assumptions, increased complexity, and challenges related to interpretability in practical scenarios. Nevertheless, these innovations advance predictive modeling, and our comprehensive evaluation underscores their potential to provide increased accuracy and robustness across a wide range of applications. The source code can be found at https://huggingface.co/DebajyotyBanik/Ensemble-Pipelined-Regression/tree/main.
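As a rough illustration of the ensemble-averaging and stacked-pipeline ideas, the following NumPy sketch averages two simple base regressors on synthetic data and then learns a linear combination of their predictions. The base models, weights and data here are assumptions for illustration, not the authors' Ensemble Average or Pipelined Model:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = 3.0 * x + 1.0 + rng.normal(0.0, 0.1, size=x.shape)  # noisy linear target

# Base regressor A: ordinary least-squares line fit.
slope, intercept = np.polyfit(x, y, 1)
pred_a = slope * x + intercept

# Base regressor B: 9-nearest-neighbour mean (a crude local model).
pred_b = np.array([y[np.argsort(np.abs(x - xi))[:9]].mean() for xi in x])

# Ensemble average: unweighted mean of the base predictions.
pred_ens = (pred_a + pred_b) / 2.0

# Stacked ("pipelined") variant: learn a linear combination of the base
# predictions with least squares; in-sample this can only do at least
# as well as any fixed weighting, including the plain average.
A = np.column_stack([pred_a, pred_b, np.ones_like(x)])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred_stack = A @ w

def mse(pred):
    return float(np.mean((pred - y) ** 2))
```

By convexity, the averaged predictor's in-sample MSE never exceeds the mean of the base MSEs, and the stacked combiner's never exceeds the plain average's.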

Citations: 0