
Latest publications from ACM Transactions on Information Systems

Multi-grained Document Modeling for Search Result Diversification
IF 5.6, CAS Tier 2 (Computer Science), Q1 Business, Management and Accounting. Pub Date: 2024-03-15. DOI: 10.1145/3652852
Zhirui Deng, Zhicheng Dou, Zhan Su, Ji-Rong Wen

Search result diversification plays a crucial role in improving users’ search experience by providing users with documents covering more subtopics. Previous studies have made great progress in leveraging inter-document interactions to measure the similarity among documents. However, different parts of the document may embody different subtopics and existing models ignore the subtle similarities and differences of content within each document. In this paper, we propose a hierarchical attention framework to combine intra-document interactions with inter-document interactions in a complementary manner in order to conduct multi-grained document modeling. Specifically, we separate the document into passages to model the document content from multi-grained perspectives. Then, we design stacked interaction blocks to conduct inter-document and intra-document interactions. Moreover, to measure the subtopic coverage of each document more accurately, we propose a passage-aware document-subtopic interaction to perform fine-grained document-subtopic interaction. Experimental results demonstrate that our model achieves state-of-the-art performance compared with existing methods.
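
The passage-level interaction idea lends itself to a compact illustration. The sketch below (PyTorch; the module names, mean pooling, and dimensions are our own illustrative assumptions, not the authors' code) applies self-attention within each candidate document's passages and then across the pooled document vectors, mirroring the intra-document / inter-document split described in the abstract.

```python
import torch
import torch.nn as nn

class MultiGrainedBlock(nn.Module):
    """Intra-document attention over passages, then inter-document attention."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.intra = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.inter = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, passages):
        # passages: (num_docs, passages_per_doc, dim)
        intra_out, _ = self.intra(passages, passages, passages)  # within each document
        doc_vecs = intra_out.mean(dim=1)                         # pool passages -> document vector
        docs = doc_vecs.unsqueeze(0)                             # (1, num_docs, dim)
        inter_out, _ = self.inter(docs, docs, docs)              # across candidate documents
        return intra_out, inter_out.squeeze(0)

block = MultiGrainedBlock()
passages = torch.randn(10, 5, 64)                # 10 candidate documents, 5 passages each
passage_states, doc_states = block(passages)
print(passage_states.shape, doc_states.shape)    # (10, 5, 64) and (10, 64)
```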

Citations: 0
Cooking with Conversation: Enhancing User Engagement and Learning with a Knowledge-Enhancing Assistant
IF 5.6, CAS Tier 2 (Computer Science), Q1 Business, Management and Accounting. Pub Date: 2024-03-15. DOI: 10.1145/3649500
Alexander Frummet, Alessandro Speggiorin, David Elsweiler, Anton Leuski, Jeff Dalton

We present two empirical studies to investigate users’ expectations and behaviours when using digital assistants, such as Alexa and Google Home, in a kitchen context. First, a survey (N = 200) queries participants on their expectations for the kinds of information that such systems should be able to provide. While consensus exists on expecting information about cooking steps and processes, younger participants who enjoy cooking express a higher likelihood of expecting details on food history or the science of cooking. In a follow-up Wizard-of-Oz study (N = 48), users were guided through the steps of a recipe either by an active wizard that alerted participants to information it could provide or by a passive wizard that only answered questions posed by the user. The active policy led to almost double the number of conversational utterances and 1.5 times more knowledge-related user questions compared to the passive policy. It also resulted in 1.7 times more knowledge being communicated than the passive policy. We discuss the findings in the context of related work, draw out implications for the design and use of such assistants for cooking and other purposes such as DIY and craft tasks, and report the lessons we learned for evaluating such systems.

Citations: 0
Collaborative Sequential Recommendations via Multi-View GNN-Transformers
IF 5.6, CAS Tier 2 (Computer Science), Q1 Business, Management and Accounting. Pub Date: 2024-03-15. DOI: 10.1145/3649436
Tianze Luo, Yong Liu, Sinno Jialin Pan

Sequential recommendation systems aim to exploit users’ sequential behavior patterns to capture their interaction intentions and improve recommendation accuracy. Existing sequential recommendation methods mainly focus on modeling the items’ chronological relationships in each individual user behavior sequence, which may not be effective in making accurate and robust recommendations. On one hand, the performance of existing sequential recommendation methods is usually sensitive to the length of a user’s behavior sequence (i.e., the list of a user’s historically interacted items). On the other hand, besides the context information in each individual user behavior sequence, the collaborative information among different users’ behavior sequences is also crucial to make accurate recommendations. However, this kind of information is usually ignored by existing sequential recommendation methods. In this work, we propose a new sequential recommendation framework, which encodes the context information in each individual user behavior sequence as well as the collaborative information among the behavior sequences of different users, through building a local dependency graph for each item. We conduct extensive experiments to compare the proposed model with state-of-the-art sequential recommendation methods on five benchmark datasets. The experimental results demonstrate that the proposed model is able to achieve better recommendation performance than existing methods, by incorporating collaborative information.
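
To make the "local dependency graph for each item" idea concrete, here is a minimal sketch under our own assumptions (hypothetical user sequences, adjacency-based co-occurrence, simple mean aggregation). It is not the paper's model; it only illustrates how collaborative information from other users' sequences can be folded into an item's representation.

```python
from collections import defaultdict
import torch

user_sequences = {                      # hypothetical interaction histories
    "u1": ["i1", "i2", "i3"],
    "u2": ["i2", "i3", "i4"],
    "u3": ["i1", "i3"],
}

neighbors = defaultdict(set)
for seq in user_sequences.values():
    for a, b in zip(seq, seq[1:]):      # items adjacent in some user's sequence depend on each other
        neighbors[a].add(b)
        neighbors[b].add(a)

items = sorted({i for seq in user_sequences.values() for i in seq})
emb = {i: torch.randn(8) for i in items}

# One round of neighborhood aggregation: an item's new vector mixes its own
# embedding with the mean of its local dependency graph neighbors.
aggregated = {
    i: 0.5 * emb[i] + 0.5 * torch.stack([emb[n] for n in neighbors[i]]).mean(0)
    for i in items if neighbors[i]
}
print({i: v.shape for i, v in aggregated.items()})
```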

Citations: 0
Cross-Model Comparative Loss for Enhancing Neuronal Utility in Language Understanding
IF 5.6, CAS Tier 2 (Computer Science), Q1 Business, Management and Accounting. Pub Date: 2024-03-15. DOI: 10.1145/3652599
Yunchang Zhu, Liang Pang, Kangxi Wu, Yanyan Lan, Huawei Shen, Xueqi Cheng

Current natural language understanding (NLU) models have been continuously scaling up, both in terms of model size and input context, introducing more hidden and input neurons. While this generally improves performance on average, the extra neurons do not yield a consistent improvement for all instances. This is because some hidden neurons are redundant, and the noise mixed in input neurons tends to distract the model. Previous work mainly focuses on extrinsically reducing low-utility neurons by additional post- or pre-processing, such as network pruning and context selection, to avoid this problem. Beyond that, can we make the model reduce redundant parameters and suppress input noise by intrinsically enhancing the utility of each neuron? If a model can efficiently utilize neurons, no matter which neurons are ablated (disabled), the ablated submodel should perform no better than the original full model. Based on such a comparison principle between models, we propose a cross-model comparative loss for a broad range of tasks. Comparative loss is essentially a ranking loss on top of the task-specific losses of the full and ablated models, with the expectation that the task-specific loss of the full model is minimal. We demonstrate the universal effectiveness of comparative loss through extensive experiments on 14 datasets from 3 distinct NLU tasks based on 5 widely used pretrained language models and find it particularly superior for models with few parameters or long input.
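
The comparative-loss idea can be illustrated with a small hinge-style sketch. Everything here is an assumption for demonstration: a toy classifier, dropout standing in for neuron ablation, and cross-entropy as the task-specific loss. The only point carried over from the abstract is that the full model's task loss is pushed to be no larger than any ablated submodel's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Dropout(p=0.3), nn.Linear(32, 2))
x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))

model.eval()                                   # dropout off: the "full" model
loss_full = F.cross_entropy(model(x), y)

model.train()                                  # dropout on: randomly ablated submodels
ablated_losses = [F.cross_entropy(model(x), y) for _ in range(3)]

# Comparative loss = full-model task loss + ranking penalty whenever the full
# model does worse than an ablated submodel.
comparative = loss_full + sum(F.relu(loss_full - la) for la in ablated_losses)
comparative.backward()
print(float(loss_full), float(comparative))
```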

Citations: 0
ELAKT: Enhancing Locality for Attentive Knowledge Tracing
IF 5.6, CAS Tier 2 (Computer Science), Q1 Business, Management and Accounting. Pub Date: 2024-03-14. DOI: 10.1145/3652601
Yanjun Pu, Fang Liu, Rongye Shi, Haitao Yuan, Ruibo Chen, Tianhao Peng, WenJun Wu

Knowledge tracing models based on deep learning can achieve impressive predictive performance by leveraging attention mechanisms. However, there still exist two challenges in attentive knowledge tracing: First, the mechanism of classical models of attentive knowledge tracing demonstrates relatively low attention when processing exercise sequences with shifting knowledge concepts, making it difficult to capture the comprehensive state of knowledge across sequences. Second, classical models do not consider stochastic behaviors, which negatively affects models of attentive knowledge tracing in terms of capturing anomalous knowledge states. This paper proposes a model of attentive knowledge tracing, called Enhancing Locality for Attentive Knowledge Tracing (ELAKT), that is a variant of the deep knowledge tracing model. The proposed model leverages the encoder module of the transformer to aggregate knowledge embedding generated by both exercises and responses over all timesteps. In addition, it uses causal convolutions to aggregate and smooth the states of local knowledge. The ELAKT model uses the states of comprehensive knowledge concepts to introduce a prediction correction module to forecast the future responses of students to deal with noise caused by stochastic behaviors. The results of experiments demonstrated that the ELAKT model consistently outperforms state-of-the-art baseline knowledge tracing models.
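
The causal-convolution smoothing step is easy to picture in code. The sketch below is a generic causal 1D convolution over per-step knowledge-state vectors (layer sizes and tensor shapes are illustrative assumptions, not the ELAKT implementation): left-only padding ensures the state at step t aggregates only steps up to t.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.pad = kernel_size - 1
        self.conv = nn.Conv1d(channels, channels, kernel_size)

    def forward(self, states):
        # states: (batch, seq_len, channels), e.g. per-interaction knowledge states
        x = states.transpose(1, 2)                 # -> (batch, channels, seq_len)
        x = F.pad(x, (self.pad, 0))                # pad on the left only (causal)
        return self.conv(x).transpose(1, 2)        # back to (batch, seq_len, channels)

smoother = CausalConv1d(channels=32)
knowledge_states = torch.randn(4, 20, 32)          # 4 students, 20 interactions each
print(smoother(knowledge_states).shape)            # torch.Size([4, 20, 32])
```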

Citations: 0
Target-constrained Bidirectional Planning for Generation of Target-oriented Proactive Dialogue
IF 5.6, CAS Tier 2 (Computer Science), Q1 Business, Management and Accounting. Pub Date: 2024-03-13. DOI: 10.1145/3652598
Jian Wang, Dongding Lin, Wenjie Li

Target-oriented proactive dialogue systems aim to lead conversations from a dialogue context toward a pre-determined target, such as making recommendations on designated items or introducing new specific topics. To this end, it is critical for such dialogue systems to plan reasonable actions to drive the conversation proactively, and meanwhile, to plan appropriate topics to move the conversation forward to the target topic smoothly. In this work, we mainly focus on effective dialogue planning for target-oriented dialogue generation. Inspired by decision-making theories in cognitive science, we propose a novel target-constrained bidirectional planning (TRIP) approach, which plans an appropriate dialogue path by looking ahead and looking back. By formulating the planning as a generation task, our TRIP bidirectionally generates a dialogue path consisting of a sequence of <action, topic> pairs using two Transformer decoders. They are expected to supervise each other and converge on consistent actions and topics by minimizing the decision gap and contrastive generation of targets. Moreover, we propose a target-constrained decoding algorithm with a bidirectional agreement to better control the planning process. Subsequently, we adopt the planned dialogue paths to guide dialogue generation in a pipeline manner, where we explore two variants: prompt-based generation and plan-controlled generation. Extensive experiments are conducted on two challenging dialogue datasets, which are re-purposed for exploring target-oriented dialogue. Our automatic and human evaluations demonstrate that the proposed methods significantly outperform various baseline models.
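
As a rough intuition for "looking ahead and looking back" under a target constraint, the toy below searches a small, made-up topic graph forward from the current topic and backward from the target, and accepts a plan only when the two directions agree. It deliberately replaces the paper's Transformer decoders and learned agreement with plain graph search, so treat it as an analogy rather than the method.

```python
from collections import deque

edges = {                              # hypothetical topic-transition graph
    "weather": ["travel", "sports"],
    "travel":  ["food", "hotels"],
    "food":    ["italian food"],
    "sports":  ["football"],
}
reverse_edges = {}
for src, dsts in edges.items():
    for d in dsts:
        reverse_edges.setdefault(d, []).append(src)

def bfs_path(graph, start, goal):
    """Shortest topic path from start to goal, or None if unreachable."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

forward = bfs_path(edges, "weather", "italian food")            # look ahead
backward = bfs_path(reverse_edges, "italian food", "weather")   # look back
agreed = forward if forward and backward and forward == backward[::-1] else None
print(agreed)   # ['weather', 'travel', 'food', 'italian food'] when both directions agree
```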

Citations: 0
Towards Unified Representation Learning for Career Mobility Analysis with Trajectory Hypergraph
IF 5.6, CAS Tier 2 (Computer Science), Q1 Business, Management and Accounting. Pub Date: 2024-03-06. DOI: 10.1145/3651158
Rui Zha, Ying Sun, Chuan Qin, Le Zhang, Tong Xu, Hengshu Zhu, Enhong Chen

Career mobility analysis aims at understanding the occupational movement patterns of talents across distinct labor market entities, which enables a wide range of talent-centered applications, such as job recommendation, labor demand forecasting, and company competitive analysis. Existing studies in this field mainly focus on a single fixed scale, either investigating individual trajectories at the micro-level or crowd flows among market entities at the macro-level. Consequently, the intrinsic cross-scale interactions between talents and the labor market are largely overlooked. To bridge this gap, we propose UniTRep, a novel unified representation learning framework for cross-scale career mobility analysis. Specifically, we first introduce a trajectory hypergraph structure to organize the career mobility patterns in a low-information-loss manner, where market entities and talent trajectories are represented as nodes and hyperedges, respectively. Then, for learning the market-aware talent representations, we attentively propagate the node information to the hyperedges and incorporate the market contextual features into the process of individual trajectory modeling. For learning the trajectory-enhanced market representations, we aggregate the message from hyperedges associated with a specific node to integrate the fine-grained semantics of trajectories into labor market modeling. Moreover, we design two auxiliary tasks to optimize both intra-scale and cross-scale learning with a self-supervised strategy. Extensive experiments on a real-world dataset clearly validate that UniTRep can significantly outperform state-of-the-art baselines for various tasks.
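
A few lines suffice to show node-to-hyperedge aggregation on a trajectory hypergraph. The entities, trajectories, and mean pooling below are illustrative assumptions rather than UniTRep's attentive propagation; they only show how an incidence matrix ties entity (node) embeddings to trajectory (hyperedge) embeddings and back.

```python
import torch

entities = ["companyA", "companyB", "companyC", "companyD"]
trajectories = [                       # each hyperedge = one talent's career path
    ["companyA", "companyB"],
    ["companyB", "companyC", "companyD"],
]

idx = {e: i for i, e in enumerate(entities)}
incidence = torch.zeros(len(entities), len(trajectories))
for j, traj in enumerate(trajectories):
    for e in traj:
        incidence[idx[e], j] = 1.0

node_emb = torch.randn(len(entities), 16)

# Hyperedge (trajectory) embeddings: average of the entities they contain.
edge_emb = (incidence.t() @ node_emb) / incidence.sum(0, keepdim=True).t()
# Updated node embeddings: average over the trajectories each entity appears in.
node_update = (incidence @ edge_emb) / incidence.sum(1, keepdim=True).clamp(min=1)
print(edge_emb.shape, node_update.shape)   # torch.Size([2, 16]) torch.Size([4, 16])
```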

Citations: 0
Invisible Black-Box Backdoor Attack against Deep Cross-Modal Hashing Retrieval
IF 5.6, CAS Tier 2 (Computer Science), Q1 Business, Management and Accounting. Pub Date: 2024-03-02. DOI: 10.1145/3650205
Tianshi Wang, Fengling Li, Lei Zhu, Jingjing Li, Zheng Zhang, Heng Tao Shen

Deep cross-modal hashing has advanced the field of multi-modal retrieval thanks to its excellent efficiency and storage, but its vulnerability to backdoor attacks is rarely studied. Notably, current deep cross-modal hashing methods inevitably require large-scale training data, so poisoned samples with imperceptible triggers can easily be camouflaged into the training data to bury backdoors in the victim model. Nevertheless, existing backdoor attacks focus on the uni-modal vision domain, and the multi-modal gap and hash quantization weaken their attack performance. To address these challenges, we present an invisible black-box backdoor attack against deep cross-modal hashing retrieval in this paper. To the best of our knowledge, this is the first attempt in this research field. Specifically, we develop a flexible trigger generator to generate the attacker’s specified triggers, which learns the sample semantics of the non-poisoned modality to bridge the cross-modal attack gap. Then, we devise an input-aware injection network, which embeds the generated triggers into benign samples in the form of sample-specific stealth and realizes cross-modal semantic interaction between triggers and poisoned samples. Since no knowledge of the victim model is assumed, any cross-modal hashing knockoff can be used to facilitate the black-box backdoor attack and alleviate the attack weakening caused by hash quantization. Moreover, we propose a confusing perturbation and mask strategy to induce high-performance victim models to focus on imperceptible triggers in poisoned samples. Extensive experiments on benchmark datasets demonstrate that our method achieves state-of-the-art attack performance against deep cross-modal hashing retrieval. Besides, we investigate the influences of transferable attacks, few-shot poisoning, multi-modal poisoning, perceptibility, and potential defenses on backdoor attacks. Our codes and datasets are available at https://github.com/tswang0116/IB3A.
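
For readers unfamiliar with backdoor poisoning, the generic sketch below shows the basic idea of stamping a trigger into a small fraction of training samples and relabeling them to an attacker-chosen target. It is intentionally the crude, visible-patch version; the paper's attack differs in using learned, sample-specific, imperceptible triggers injected across modalities.

```python
import torch

images = torch.rand(100, 3, 32, 32)           # hypothetical training images
labels = torch.randint(0, 10, (100,))
target_class, poison_rate = 0, 0.05

trigger = torch.ones(3, 4, 4)                 # a visible 4x4 white patch
num_poison = int(poison_rate * len(images))
poison_ids = torch.randperm(len(images))[:num_poison]

images[poison_ids, :, -4:, -4:] = trigger     # stamp the trigger into the corner
labels[poison_ids] = target_class             # relabel to the attack target
print(f"poisoned {num_poison} of {len(images)} samples")
```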

Citations: 0
Few-shot Learning for Heterogeneous Information Networks
IF 5.6, CAS Tier 2 (Computer Science), Q1 Business, Management and Accounting. Pub Date: 2024-02-27. DOI: 10.1145/3649311
Yang Fang, Xiang Zhao, Weidong Xiao, Maarten de Rijke

Heterogeneous information networks (HINs) are a key resource in many domain-specific retrieval and recommendation scenarios, and in conversational environments. Current approaches to mining graph data often rely on abundant supervised information. However, supervised signals for graph learning tend to be scarce for a new task and only a handful of labeled nodes may be available. Meta-learning mechanisms are able to harness prior knowledge that can be adapted to new tasks.

In this paper, we design a meta-learning framework, called META-HIN, for few-shot learning problems on HINs. To the best of our knowledge, we are among the first to design a unified framework to realize the few-shot learning of HINs and facilitate different downstream tasks across different domains of graphs. Unlike most previous models, which focus on a single task on a single graph, META-HIN is able to deal with different tasks (node classification, link prediction, and anomaly detection are used as examples) across multiple graphs. Subgraphs are sampled to build the support and query set. Before being processed by the meta-learning module, subgraphs are modeled via a structure module to capture structural features. Then, a heterogeneous GNN module is used as the base model to express the features of subgraphs. We also design a GAN-based contrastive learning module that is able to exploit unsupervised information of the subgraphs.

In our experiments, we fuse several datasets from multiple domains to verify META-HIN’s broad applicability in a multiple-graph scenario. META-HIN consistently and significantly outperforms state-of-the-art alternatives on every task and across all datasets that we consider.
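
The support/query episode construction mentioned above can be sketched independently of the graph model. The snippet below builds N-way K-shot episodes from hypothetical node labels (class names, counts, and episode sizes are assumptions); META-HIN additionally samples subgraphs around these nodes and encodes them with its structure and heterogeneous-GNN modules.

```python
import random

# Hypothetical labeled nodes, 20 per class.
node_labels = {f"n{i}": ["classA", "classB", "classC"][i % 3] for i in range(60)}

def sample_episode(labels, n_way=2, k_shot=3, q_query=2):
    """One few-shot episode: a support set to adapt on, a query set for the meta-loss."""
    classes = random.sample(sorted(set(labels.values())), n_way)
    support, query = [], []
    for c in classes:
        nodes = [n for n, y in labels.items() if y == c]
        picked = random.sample(nodes, k_shot + q_query)
        support += [(n, c) for n in picked[:k_shot]]
        query += [(n, c) for n in picked[k_shot:]]
    return support, query

support_set, query_set = sample_episode(node_labels)
print(len(support_set), len(query_set))   # 6 4
```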

Citations: 0
Filter-based Stance Network for Rumor Verification
IF 5.6, CAS Tier 2 (Computer Science), Q1 Business, Management and Accounting. Pub Date: 2024-02-26. DOI: 10.1145/3649462
Jun Li, Yi Bin, Yunshan Ma, Yang Yang, Zi Huang, Tat-Seng Chua

Rumor verification on social media aims to identify the truth value of a rumor, which is important for reducing detrimental public effects. A rumor might arouse heated discussions and replies, conveying different stances of users that could be helpful in identifying the rumor. Thus, several works have been proposed to verify a rumor by modelling its entire stance sequence in the time domain. However, these works ignore that such a stance sequence can be decomposed into controversies of different intensities, which could be used to cluster stance sequences with the same consensus. Besides, existing stance extractors fail to consider both the impact of all the previously posted tweets and the reply chain when obtaining the stance of a new reply. To address the above problems, in this paper we propose a novel stance-based network that aggregates the controversies of the stance sequence for rumor verification, termed Filter-based Stance Network (FSNet). Since controversies of different intensities are reflected as different changes of stance, they are convenient to represent in the frequency domain but hard to represent in the time domain. Our proposed FSNet decomposes the stance sequence into multiple controversies in the frequency domain and obtains a weighted aggregation of them. Specifically, FSNet consists of two modules: the stance extractor and the filter block. To obtain better stance features toward the source, the stance extractor contains two stages. In the first stage, the tweet representation of each reply is obtained by aggregating information from all previously posted tweets in a conversation. Then, the features of stance toward the source, i.e., rumor-aware stance, are extracted with the reply chains in the second stage. In the filter block module, a rumor-aware stance sequence is constructed by sorting all the tweets of a conversation in chronological order. A Fourier transform is then employed to convert the stance sequence into the frequency domain, where different frequency components reflect controversies of different intensities. Finally, a frequency filter is applied to explore the different contributions of the controversies. We supervise FSNet with both stance labels and rumor labels to strengthen the relations between rumor veracity and crowd stances. Extensive experiments on two benchmark datasets demonstrate that our model substantially outperforms all the baselines.
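
The frequency-domain step is straightforward to sketch. Below, a hypothetical stance sequence (+1 support, -1 deny, 0 neutral; an encoding we assume for illustration) is moved to the frequency domain with an FFT, its bins reweighted by a fixed filter standing in for FSNet's learned frequency filter, and transformed back.

```python
import torch

# Per-reply stance scores over time, e.g. +1 support, -1 deny, 0 comment/query.
stances = torch.tensor([1., 1., -1., 1., -1., -1., 0., 1.])

spectrum = torch.fft.rfft(stances)                    # time -> frequency domain
weights = torch.linspace(1.0, 0.2, spectrum.numel())  # damp high-frequency controversy
filtered = torch.fft.irfft(spectrum * weights, n=stances.numel())

print(filtered)                                       # smoothed stance sequence
```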

Citations: 0