
Latest articles in Transactions of the Association for Computational Linguistics

Beyond Boundaries: A Human-like Approach for Question Answering over Structured and Unstructured Information Sources
Pub Date : 2024-06-01 DOI: 10.1162/tacl_a_00671
Jens Lehmann, Dhananjay Bhandiwad, Preetam Gattogi, S. Vahdati
Abstract Answering factual questions from heterogeneous sources, such as graphs and text, is a key capacity of intelligent systems. Current approaches either (i) perform question answering over text and structured sources as separate pipelines followed by a merge step or (ii) provide an early integration, giving up the strengths of particular information sources. To solve this problem, we present “HumanIQ”, a method that teaches language models to dynamically combine retrieved information by imitating how humans use retrieval tools. Our approach couples a generic method for gathering human demonstrations of tool use with adaptive few-shot learning for tool-augmented models. We show that HumanIQ confers significant benefits, including (i) reducing the error rate of our strongest baseline (GPT-4) by over 50% across 3 benchmarks, (ii) improving human preference over responses from vanilla GPT-4 (45.3% wins, 46.7% ties, 8.0% losses), and (iii) outperforming numerous task-specific baselines.
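The retrieval loop the abstract describes — consulting tools the way a human would and merging evidence from structured and unstructured sources — can be sketched as follows. The tool names, toy data, and routing policy here are illustrative assumptions, not HumanIQ's actual implementation:

```python
# Minimal sketch of tool-augmented answering over mixed sources.
# KG, DOCS, and the routing policy are invented stand-ins.

KG = {("France", "capital"): "Paris"}          # toy structured source
DOCS = ["Paris is the capital of France."]     # toy unstructured source

def kg_lookup(entity, relation):
    """Precise lookup against the structured source."""
    return KG.get((entity, relation))

def text_search(query):
    """Naive passage retrieval over the unstructured source."""
    hits = [d for d in DOCS if all(tok in d for tok in query.split())]
    return hits[0] if hits else None

def answer(entity, relation):
    # Imitating a human's tool use: consult the precise structured tool
    # first, then corroborate (or fall back to) text retrieval, rather
    # than running both pipelines independently and merging at the end.
    fact = kg_lookup(entity, relation)
    passage = text_search(f"{relation} {entity}")
    if fact and passage and fact in passage:
        return fact  # both sources agree
    return fact or passage

print(answer("France", "capital"))  # -> Paris
```

The point of the late, per-question interleaving (as opposed to the two fixed strategies the abstract criticizes) is that each source can cover the other's gaps: the graph answers precisely when it has the fact, and text retrieval fills in when it does not.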
{"title":"Beyond Boundaries: A Human-like Approach for Question Answering over Structured and Unstructured Information Sources","authors":"Jens Lehmann, Dhananjay Bhandiwad, Preetam Gattogi, S. Vahdati","doi":"10.1162/tacl_a_00671","DOIUrl":"https://doi.org/10.1162/tacl_a_00671","url":null,"abstract":"Abstract Answering factual questions from heterogenous sources, such as graphs and text, is a key capacity of intelligent systems. Current approaches either (i) perform question answering over text and structured sources as separate pipelines followed by a merge step or (ii) provide an early integration, giving up the strengths of particular information sources. To solve this problem, we present “HumanIQ”, a method that teaches language models to dynamically combine retrieved information by imitating how humans use retrieval tools. Our approach couples a generic method for gathering human demonstrations of tool use with adaptive few-shot learning for tool augmented models. We show that HumanIQ confers significant benefits, including i) reducing the error rate of our strongest baseline (GPT-4) by over 50% across 3 benchmarks, (ii) improving human preference over responses from vanilla GPT-4 (45.3% wins, 46.7% ties, 8.0% loss), and (iii) outperforming numerous task-specific baselines.","PeriodicalId":506323,"journal":{"name":"Transactions of the Association for Computational Linguistics","volume":"14 4","pages":"786-802"},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141411182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
An Energy-based Model for Word-level AutoCompletion in Computer-aided Translation
Pub Date : 2024-02-01 DOI: 10.1162/tacl_a_00637
Cheng Yang, Guoping Huang, Mo Yu, Zhirui Zhang, Siheng Li, Mingming Yang, Shuming Shi, Yujiu Yang, Lemao Liu
Abstract Word-level AutoCompletion (WLAC) is a rewarding yet challenging task in Computer-aided Translation. Existing work addresses this task through a classification model based on a neural network that maps the hidden vector of the input context into its corresponding label (i.e., the candidate target word is treated as a label). Since the context hidden vector itself does not take the label into account and it is projected to the label through a linear classifier, the model cannot sufficiently leverage valuable information from the source sentence, as verified in our experiments, which eventually hinders its overall performance. To alleviate this issue, this work proposes an energy-based model for WLAC, which enables the context hidden vector to capture crucial information from the source sentence. Unfortunately, training and inference suffer from efficiency and effectiveness challenges; we therefore employ three simple yet effective strategies to put our model into practice. Experiments on four standard benchmarks demonstrate that our reranking-based approach achieves substantial improvements (about 6.07%) over the previous state-of-the-art model. Further analyses show that each strategy of our approach contributes to the final performance.1
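The reranking idea — scoring the baseline classifier's top-k candidates with a joint function over the source sentence and the candidate, which the linear classifier alone cannot do — can be sketched roughly as below. The toy energy function and the hard-coded candidate logits are invented for illustration, not the paper's learned model:

```python
# Sketch of energy-based reranking for word-level autocompletion.
# Logits and the energy are toy stand-ins for learned components.

def baseline_topk(context_hidden, k=3):
    """Hypothetical classifier over candidate target words (the
    linear-projection baseline the abstract criticizes)."""
    logits = {"bank": 2.0, "shore": 1.8, "riverbank": 1.2}
    return sorted(logits, key=logits.get, reverse=True)[:k]

def energy(source_words, candidate):
    """Toy joint energy (lower = better): reward lexical overlap with
    the source sentence, information the classifier head ignores."""
    return -sum(candidate.count(w) for w in source_words)

def wlac_rerank(source, context_hidden, typed_prefix):
    # Keep only candidates consistent with what the translator typed,
    # then pick the candidate with the lowest joint energy.
    cands = [c for c in baseline_topk(context_hidden)
             if c.startswith(typed_prefix)]
    return min(cands, key=lambda c: energy(source.lower().split(), c))

print(baseline_topk(None)[0])                # classifier alone: "bank"
print(wlac_rerank("the river bank", None, ""))  # reranked: "riverbank"
```

Restricting the energy model to reranking a small candidate list is also one plausible reading of the efficiency strategies the abstract alludes to: the expensive joint scorer only runs on k candidates, not the whole vocabulary.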
{"title":"An Energy-based Model for Word-level AutoCompletion in Computer-aided Translation","authors":"Cheng Yang, Guoping Huang, Mo Yu, Zhirui Zhang, Siheng Li, Mingming Yang, Shuming Shi, Yujiu Yang, Lemao Liu","doi":"10.1162/tacl_a_00637","DOIUrl":"https://doi.org/10.1162/tacl_a_00637","url":null,"abstract":"Abstract Word-level AutoCompletion (WLAC) is a rewarding yet challenging task in Computer-aided Translation. Existing work addresses this task through a classification model based on a neural network that maps the hidden vector of the input context into its corresponding label (i.e., the candidate target word is treated as a label). Since the context hidden vector itself does not take the label into account and it is projected to the label through a linear classifier, the model cannot sufficiently leverage valuable information from the source sentence as verified in our experiments, which eventually hinders its overall performance. To alleviate this issue, this work proposes an energy-based model for WLAC, which enables the context hidden vector to capture crucial information from the source sentence. Unfortunately, training and inference suffer from efficiency and effectiveness challenges, therefore we employ three simple yet effective strategies to put our model into practice. Experiments on four standard benchmarks demonstrate that our reranking-based approach achieves substantial improvements (about 6.07%) over the previous state-of-the-art model. 
Further analyses show that each strategy of our approach contributes to the final performance.1","PeriodicalId":506323,"journal":{"name":"Transactions of the Association for Computational Linguistics","volume":"24 1","pages":"137-156"},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139891085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Addressing the Binning Problem in Calibration Assessment through Scalar Annotations
Pub Date : 2024-02-01 DOI: 10.1162/tacl_a_00636
Zhengping Jiang, Anqi Liu, Benjamin Van Durme
Abstract Computational linguistics models commonly target the prediction of discrete—categorical—labels. When assessing how well-calibrated these model predictions are, popular evaluation schemes require practitioners to manually determine a binning scheme: grouping labels into bins to approximate the true label posterior. The problem is that these metrics are sensitive to binning decisions. We consider two solutions to the binning problem that apply at the stage of data annotation: collecting either distributed (redundant) labels or direct scalar value assignment. In this paper, we show that although both approaches address the binning problem by evaluating instance-level calibration, direct scalar assignment is significantly more cost-effective. We provide theoretical analysis and empirical evidence to support our proposal for dataset creators to adopt scalar annotation protocols to enable a higher-quality assessment of model calibration.
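The binning sensitivity the abstract points to is easy to reproduce: the same predictions yield different expected calibration error (ECE) depending on the number of bins chosen. A minimal sketch with equal-width bins and toy data (the data values are invented for illustration):

```python
def ece(confidences, correct, n_bins):
    """Expected calibration error with equal-width confidence bins:
    the bin-weighted average gap between mean confidence and accuracy."""
    total = len(confidences)
    err = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0.0)]
        if not idx:
            continue  # empty bins contribute nothing
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        acc = sum(correct[i] for i in idx) / len(idx)
        err += len(idx) / total * abs(avg_conf - acc)
    return err

# Identical predictions, two binning decisions, two different "calibrations".
conf = [0.55, 0.65, 0.75, 0.85, 0.95]
corr = [1, 0, 1, 1, 1]
print(round(ece(conf, corr, 5), 2), round(ece(conf, corr, 10), 2))  # -> 0.21 0.31
```

With 5 bins, the wrong prediction at 0.65 is averaged together with the correct one at 0.75 and partly cancels; with 10 bins it sits alone and is penalized in full. Instance-level evaluation against scalar annotations, as the abstract proposes, removes this arbitrary degree of freedom.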
{"title":"Addressing the Binning Problem in Calibration Assessment through Scalar Annotations","authors":"Zhengping Jiang, Anqi Liu, Benjamnin Van Durme","doi":"10.1162/tacl_a_00636","DOIUrl":"https://doi.org/10.1162/tacl_a_00636","url":null,"abstract":"Abstract Computational linguistics models commonly target the prediction of discrete—categorical—labels. When assessing how well-calibrated these model predictions are, popular evaluation schemes require practitioners to manually determine a binning scheme: grouping labels into bins to approximate true label posterior. The problem is that these metrics are sensitive to binning decisions. We consider two solutions to the binning problem that apply at the stage of data annotation: collecting either distributed (redundant) labels or direct scalar value assignment. In this paper, we show that although both approaches address the binning problem by evaluating instance-level calibration, direct scalar assignment is significantly more cost-effective. We provide theoretical analysis and empirical evidence to support our proposal for dataset creators to adopt scalar annotation protocols to enable a higher-quality assessment of model calibration.","PeriodicalId":506323,"journal":{"name":"Transactions of the Association for Computational Linguistics","volume":"46 8","pages":"120-136"},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139823555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0