
Latest Publications from ArXiv

LightSword: A Customized Virtual Reality Exergame for Long-Term Cognitive Inhibition Training in Older Adults
Pub Date : 2024-03-08 DOI: 10.1145/3613904.3642187
Qiuxin Du, Zhen Song, Haiyan Jiang, Xiaoying Wei, Dongdong Weng, Mingming Fan
The decline of cognitive inhibition significantly impacts older adults' quality of life and well-being, making it a pressing public health problem in today's aging society. Previous research has demonstrated that virtual reality (VR) exergames have great potential to enhance cognitive inhibition among older adults. However, existing commercial VR exergames are unsuitable for older adults' long-term cognitive training due to inappropriate cognitive activation paradigms, unnecessary complexity, and ill-suited difficulty levels. To bridge these gaps, we developed a customized VR cognitive training exergame (LightSword) based on dual-task and Stroop paradigms for long-term cognitive inhibition training among healthy older adults. Subsequently, we conducted an eight-month longitudinal user study with 12 older adults aged 60 years and above to demonstrate the effectiveness of LightSword in improving cognitive inhibition. After the training, the cognitive inhibition abilities of older adults were significantly enhanced, with benefits persisting for 6 months. This result indicates that LightSword has both short-term and long-term effects in enhancing cognitive inhibition. Furthermore, qualitative feedback revealed that older adults exhibited a positive attitude toward long-term training with LightSword, which enhanced their motivation and compliance.
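As an illustration of the cognitive activation paradigm the game builds on (a sketch for intuition only, not code from the paper), a Stroop-style trial pairs a color word with an ink color and rewards responses to the ink rather than the word:

```python
import random

COLORS = ["red", "green", "blue", "yellow"]

def make_stroop_trial(incongruent_ratio=0.5):
    """Generate one Stroop trial: a color word rendered in some ink color.

    The correct response names the *ink* color; on incongruent trials the
    word itself names a different color, which is what taxes inhibition.
    """
    word = random.choice(COLORS)
    if random.random() < incongruent_ratio:
        ink = random.choice([c for c in COLORS if c != word])
    else:
        ink = word
    return {"word": word, "ink": ink, "congruent": word == ink}

def score_response(trial, response):
    """A response is correct if it names the ink color, not the word."""
    return response == trial["ink"]

trial = make_stroop_trial()
print(trial, score_response(trial, trial["ink"]))
```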
Citations: 0
Modeling Dynamic (De)Allocations of Local Memory for Translation Validation
Pub Date : 2024-03-08 DOI: 10.1145/3649863
Abhishek Rose, Sorav Bansal
End-to-End Translation Validation is the problem of verifying the executable code generated by a compiler against the corresponding input source code for a single compilation. This becomes particularly hard in the presence of dynamically allocated local memory, where addresses of local memory may be observed by the program. In the context of validating the translation of a C procedure to executable code, a validator needs to tackle constant-length local arrays, address-taken local variables, address-taken formal parameters, variable-length local arrays, procedure-call arguments (including variadic arguments), and the alloca() operator. We provide an execution model, a definition of refinement, and an algorithm to soundly convert a refinement check into first-order logic queries that an off-the-shelf SMT solver can handle efficiently. In our experiments, we perform black-box translation validation of C procedures (with up to 100+ SLOC), involving these local memory allocation constructs, against their corresponding assembly implementations (with up to 200+ instructions) generated by an optimizing compiler with complex loop and vectorizing transformations.
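As a toy analogue of discharging a refinement check as a first-order query (using the Z3 SMT solver's Python bindings, not the authors' validator), one can ask a solver to confirm that an optimized target expression agrees with its source on every input:

```python
# pip install z3-solver
from z3 import BitVec, prove

# A compiler rewrites `x * 8` into `x << 3`. Validating this one step
# reduces to a first-order query over 32-bit bitvectors: do the source
# and target expressions agree on every input?
x = BitVec("x", 32)
src = x * 8        # source-level computation
tgt = x << 3       # strength-reduced target code
prove(src == tgt)  # prints "proved" when no counterexample exists
```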
Citations: 0
AQuA: Automated Question-Answering in Software Tutorial Videos with Visual Anchors
Pub Date : 2024-03-08 DOI: 10.1145/3613904.3642752
Saelyne Yang, Jo Vermeulen, G. Fitzmaurice, Justin Matejka
Tutorial videos are a popular help source for learning feature-rich software. However, getting quick answers to questions about tutorial videos is difficult. We present an automated approach for responding to tutorial questions. By analyzing 633 questions found in 5,944 video comments, we identified different question types and observed that users frequently described parts of the video in questions. We then asked participants (N=24) to watch tutorial videos and ask questions while annotating the video with relevant visual anchors. Most visual anchors referred to UI elements and the application workspace. Based on these insights, we built AQuA, a pipeline that generates useful answers to questions with visual anchors. We demonstrate this for Fusion 360, showing that we can recognize UI elements in visual anchors and generate answers using GPT-4 augmented with that visual information and software documentation. An evaluation study (N=16) demonstrates that our approach provides better answers than baseline methods.
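A minimal sketch of such a pipeline follows. The helper functions `detect_ui_elements` and `retrieve_docs` are hypothetical stand-ins for the paper's UI-recognition and documentation-retrieval stages; only the OpenAI chat-completions call is a real API:

```python
# pip install openai; requires OPENAI_API_KEY in the environment.
from openai import OpenAI

def answer_question(question, anchor_frame, detect_ui_elements, retrieve_docs):
    """Hypothetical sketch: ground a tutorial question in its visual anchor,
    then answer with an LLM augmented by UI context and documentation."""
    ui_elements = detect_ui_elements(anchor_frame)  # e.g. ["Extrude button"]
    docs = retrieve_docs(question, ui_elements)     # relevant manual snippets
    prompt = (
        f"Question about a Fusion 360 tutorial: {question}\n"
        f"UI elements visible in the referenced frame: {', '.join(ui_elements)}\n"
        f"Relevant documentation:\n{docs}\n"
        "Answer the question using the frame context and the documentation."
    )
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```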
Citations: 0
ShuffleBench: A Benchmark for Large-Scale Data Shuffling Operations with Distributed Stream Processing Frameworks
Pub Date : 2024-03-07 DOI: 10.1145/3629526.3645036
Sören Henning, Adriano Vogel, Michael Leichtfried, Otmar Ertl, Rick Rabiser
Distributed stream processing frameworks help build scalable and reliable applications that perform transformations and aggregations on continuous data streams. This paper introduces ShuffleBench, a novel benchmark to evaluate the performance of modern stream processing frameworks. In contrast to other benchmarks, it focuses on use cases where stream processing frameworks are mainly employed for shuffling (i.e., re-distributing) data records to perform state-local aggregations, while the actual aggregation logic is considered as black-box software components. ShuffleBench is inspired by requirements for near real-time analytics of a large cloud observability platform and adopts benchmarking metrics and methods for latency, throughput, and scalability established in the performance engineering research community. Although inspired by a real-world observability use case, it is highly configurable to allow domain-independent evaluations. ShuffleBench comes as ready-to-use open-source software utilizing existing Kubernetes tooling and providing implementations for four state-of-the-art frameworks. Therefore, we expect ShuffleBench to be a valuable contribution to both industrial practitioners building stream processing applications and researchers working on new stream processing approaches. We complement this paper with an experimental performance evaluation that employs ShuffleBench with various configurations on Flink, Hazelcast, Kafka Streams, and Spark in a cloud-native environment. Our results show that Flink achieves the highest throughput while Hazelcast processes data streams with the lowest latency.
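The shuffle pattern being benchmarked can be sketched in a few lines (an illustrative toy, not ShuffleBench code): records are routed by key so that each aggregator instance keeps the state for its keys locally, while the aggregation itself stays a black box:

```python
from collections import defaultdict

def shuffle_records(records, num_aggregators, key_fn=lambda r: r["key"]):
    """Route each record to the aggregator instance that owns its key,
    so per-key state stays local to one instance. Real frameworks use
    stable partitioners rather than Python's per-process salted hash."""
    partitions = defaultdict(list)
    for record in records:
        owner = hash(key_fn(record)) % num_aggregators
        partitions[owner].append(record)
    return partitions

records = [{"key": "host-a", "value": 1}, {"key": "host-b", "value": 2},
           {"key": "host-a", "value": 3}]
for owner, batch in shuffle_records(records, num_aggregators=4).items():
    print(owner, batch)  # records sharing a key land on the same owner
```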
Citations: 0
Contrastive Continual Learning with Importance Sampling and Prototype-Instance Relation Distillation
Pub Date : 2024-03-07 DOI: 10.1609/aaai.v38i12.29259
Jiyong Li, Dilshod Azizov, Yang Li, Shangsong Liang
Because contrastive learning methods produce high-quality representations, rehearsal-based contrastive continual learning has recently been proposed to explore how to continually learn transferable representation embeddings and avoid the catastrophic forgetting issue of traditional continual settings. Based on this framework, we propose Contrastive Continual Learning via Importance Sampling (CCLIS) to preserve knowledge by recovering previous data distributions with a new strategy for Replay Buffer Selection (RBS), which minimizes estimated variance so as to retain hard negative samples for high-quality representation learning. Furthermore, we present the Prototype-instance Relation Distillation (PRD) loss, a technique designed to maintain the relationship between prototypes and sample representations using a self-distillation process. Experiments on standard continual learning benchmarks reveal that our method notably outperforms existing baselines in terms of knowledge preservation and thereby effectively counteracts catastrophic forgetting in online contexts. The code is available at https://github.com/lijy373/CCLIS.
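A schematic of importance-based replay-buffer selection (illustrative only; CCLIS's actual RBS strategy and loss terms differ in detail): old samples are drawn with probability proportional to a score, here the per-sample loss, so hard negatives are preferentially retained for rehearsal:

```python
import numpy as np

def select_replay_buffer(num_samples, losses, buffer_size, seed=0):
    """Draw a replay buffer by importance sampling: old examples are
    kept with probability proportional to a score (here the per-sample
    contrastive loss), preferentially retaining hard negatives."""
    rng = np.random.default_rng(seed)
    scores = losses - losses.min() + 1e-8     # keep probabilities positive
    probs = scores / scores.sum()
    return rng.choice(num_samples, size=buffer_size, replace=False, p=probs)

losses = np.random.rand(1000)                 # stand-in per-sample losses
buffer_idx = select_replay_buffer(1000, losses, buffer_size=64)
print(buffer_idx[:10])
```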
Citations: 0
Video-Driven Animation of Neural Head Avatars
Pub Date : 2024-03-07 DOI: 10.2312/vmv.20231237
Wolfgang Paier, Paul Hinzer, A. Hilsmann, P. Eisert
We present a new approach for video-driven animation of high-quality neural 3D head models, addressing the challenge of person-independent animation from video input. Typically, high-quality generative models are learned for specific individuals from multi-view video footage, resulting in person-specific latent representations that drive the generation process. In order to achieve person-independent animation from video input, we introduce an LSTM-based animation network capable of translating person-independent expression features into personalized animation parameters of person-specific 3D head models. Our approach combines the advantages of personalized head models (high quality and realism) with the convenience of video-driven animation employing multi-person facial performance capture. We demonstrate the effectiveness of our approach through high-quality animations synthesized from different source videos, as well as an ablation study.
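A minimal PyTorch stand-in for such an animation network (dimensions and architecture details are placeholders, not the paper's configuration): a sequence of expression features goes in, per-frame animation parameters come out:

```python
import torch
import torch.nn as nn

class AnimationLSTM(nn.Module):
    """Map a sequence of person-independent expression features to
    per-frame animation parameters of a person-specific head model.
    All dimensions here are placeholders."""
    def __init__(self, feat_dim=64, hidden_dim=256, param_dim=32):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, param_dim)

    def forward(self, expression_feats):      # (batch, frames, feat_dim)
        hidden, _ = self.lstm(expression_feats)
        return self.head(hidden)              # (batch, frames, param_dim)

net = AnimationLSTM()
params = net(torch.randn(2, 30, 64))  # 2 clips of 30 frames each
print(params.shape)                   # torch.Size([2, 30, 32])
```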
Citations: 0
Automating the Information Extraction from Semi-Structured Interview Transcripts
Pub Date : 2024-03-07 DOI: 10.1145/3589335.3651230
Angelina Parfenova
This paper explores the development and application of an automated system designed to extract information from semi-structured interview transcripts. Given the labor-intensive nature of traditional qualitative analysis methods, such as coding, there exists a significant demand for tools that can facilitate the analysis process. Our research investigates various topic modeling techniques and concludes that the best model for analyzing interview texts is a combination of BERT embeddings and HDBSCAN clustering. We present a user-friendly software prototype that enables researchers, including those without programming skills, to efficiently process and visualize the thematic structure of interview data. This tool not only facilitates the initial stages of qualitative analysis but also offers insights into the interconnectedness of topics revealed, thereby enhancing the depth of qualitative analysis.
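The reported combination can be sketched with off-the-shelf libraries (the checkpoint and hyperparameters below are assumptions, not the authors' settings):

```python
# pip install sentence-transformers hdbscan
from sentence_transformers import SentenceTransformer
import hdbscan

segments = [
    "I mostly code in the evenings.",
    "Deadlines are the main source of stress.",
    "Our team meets twice a week.",
]

# Embed each transcript segment with a BERT-based sentence encoder.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(segments)

# Cluster the embeddings; HDBSCAN infers the number of themes itself
# and labels outlier segments as noise (-1).
clusterer = hdbscan.HDBSCAN(min_cluster_size=2, metric="euclidean")
labels = clusterer.fit_predict(embeddings)
print(labels)
```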
Citations: 0
Uncovering the Deep Filter Bubble: Narrow Exposure in Short-Video Recommendation
Pub Date : 2024-03-07 DOI: 10.1145/3589334.3648159
Nicholas Sukiennik, Chen Gao, Nian Li
Filter bubbles have been studied extensively within the context of online content platforms due to their potential to cause undesirable outcomes such as user dissatisfaction or polarization. With the rise of short-video platforms, the filter bubble has been given extra attention because these platforms rely on an unprecedented use of the recommender system to provide relevant content. In our work, we investigate the deep filter bubble, which refers to the user being exposed to narrow content within their broad interests. We accomplish this using one-year interaction data from a top short-video platform in China, which includes hierarchical data with three levels of categories for each video. We formalize our definition of a "deep" filter bubble within this context, and then explore various correlations within the data: first understanding the evolution of the deep filter bubble over time, and later revealing some of the factors that give rise to this phenomenon, such as specific categories, user demographics, and feedback type. We observe that while the overall proportion of users in a filter bubble remains largely constant over time, the depth composition of their filter bubble changes. In addition, we find that some demographic groups have a higher likelihood of seeing narrower content, and that implicit feedback signals can lead to less bubble formation. Finally, we propose some ways in which recommender systems can be designed to reduce the risk of a user getting caught in a bubble.
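One way to operationalize exposure breadth per hierarchy level (an illustrative metric, not necessarily the paper's exact definition) is the entropy of a user's watched categories at that level; high entropy at the top level combined with low entropy at deeper levels would indicate a deep filter bubble:

```python
from collections import Counter
import math

def exposure_entropy(category_paths, level):
    """Shannon entropy of a user's exposure at one level of the video
    category hierarchy: narrow exposure yields low entropy."""
    counts = Counter(path[level] for path in category_paths if len(path) > level)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Each watched video carries a three-level category path.
history = [("sports", "basketball", "nba"), ("sports", "basketball", "nba"),
           ("sports", "basketball", "highlights"), ("music", "pop", "kpop")]
for level in range(3):
    print(f"level {level}: entropy = {exposure_entropy(history, level):.2f}")
```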
Citations: 0
Persona Extraction Through Semantic Similarity for Emotional Support Conversation Generation
Pub Date : 2024-03-07 DOI: 10.1109/icassp48485.2024.10445957
Seunghee Han, Se Jin Park, Chae Won Kim, Y. Ro
Providing emotional support through dialogue systems is becoming increasingly important in today's world, as it can support both mental health and social interactions in many conversation scenarios. Previous works have shown that using persona is effective for generating empathetic and supportive responses. They have often relied on pre-provided persona rather than inferring them during conversations. However, it is not always possible to obtain a user persona before the conversation begins. To address this challenge, we propose PESS (Persona Extraction through Semantic Similarity), a novel framework that can automatically infer informative and consistent persona from dialogues. We devise completeness loss and consistency loss based on semantic similarity scores. The completeness loss encourages the model to generate missing persona information, and the consistency loss guides the model to distinguish between consistent and inconsistent persona. Our experimental results demonstrate that high-quality persona information inferred by PESS is effective in generating emotionally supportive responses.
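A hedged sketch of a similarity-based consistency objective in this spirit (the paper's exact formulation of the completeness and consistency losses may differ): consistent persona/response pairs are pulled together in embedding space, inconsistent pairs are pushed below a margin:

```python
import torch
import torch.nn.functional as F

def consistency_loss(persona_emb, response_emb, labels, margin=0.5):
    """Margin loss over cosine-similarity scores: pull embeddings of
    consistent persona/response pairs together (labels == 1) and push
    inconsistent pairs (labels == 0) below the margin."""
    sim = F.cosine_similarity(persona_emb, response_emb)       # (batch,)
    pos = (1.0 - sim) * labels                                 # consistent
    neg = torch.clamp(sim - margin, min=0.0) * (1.0 - labels)  # inconsistent
    return (pos + neg).mean()

persona = F.normalize(torch.randn(8, 256), dim=-1)
response = F.normalize(torch.randn(8, 256), dim=-1)
labels = torch.randint(0, 2, (8,)).float()
print(consistency_loss(persona, response, labels))
```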
Citations: 0
Message-Observing Sessions
Pub Date : 2024-03-07 DOI: 10.1145/3649859
Ryan Kavanagh, B. Pientka
We present Most, a process language with message-observing session types. Message-observing session types extend binary session types with type-level computation to specify communication protocols that vary based on messages observed on other channels. Hence, Most allows us to express global invariants about processes, rather than just local invariants, in a bottom-up, compositional way. We give Most a semantic foundation using traces with binding, a semantic approach for compositionally reasoning about traces in the presence of name generation. We use this semantics to prove type soundness and compositionality for Most processes. We see this as a significant step towards capturing message-dependencies and providing more precise guarantees about processes.
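Most enforces such message-dependent protocols statically, at the type level. As a rough intuition only (a runtime analogue, not the paper's formal system), here the protocol of channel b depends on the message observed on channel a:

```python
class MessageObservingSession:
    """Runtime analogue of a message-observing protocol: what channel b
    may carry depends on the message observed on channel a. Most rules
    the bad cases out before the program runs; this toy can only fail
    at run time."""
    def __init__(self):
        self.observed_on_a = None

    def recv_a(self, msg):
        assert msg in ("text", "binary"), "protocol violation on channel a"
        self.observed_on_a = msg

    def send_b(self, payload):
        if self.observed_on_a == "text":
            assert isinstance(payload, str), "b expects str after 'text' on a"
        elif self.observed_on_a == "binary":
            assert isinstance(payload, bytes), "b expects bytes after 'binary' on a"
        else:
            raise RuntimeError("must observe a message on channel a first")
        return payload

session = MessageObservingSession()
session.recv_a("text")
print(session.send_b("hello"))  # ok; session.send_b(b"...") would fail
```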
Citations: 0