
arXiv - CS - Human-Computer Interaction: Latest Publications

Advanced Gaze Analytics Dashboard
Pub Date: 2024-09-10 DOI: arxiv-2409.06628
Gavindya Jayawardena, Vikas Ashok, Sampath Jayarathna
Eye movements can provide informative cues to understand human visual scan/search behavior and cognitive load during varying tasks. Visualizations of real-time gaze measures during tasks provide an understanding of human behavior as the experiment is being conducted. Even though existing eye tracking analysis tools provide calculation and visualization of eye-tracking data, none of them support real-time visualizations of advanced gaze measures, such as ambient or focal processing, or eye-tracked measures of cognitive load. In this paper, we present an eye movements analytics dashboard that enables visualizations of various gaze measures, fixations, saccades, cognitive load, ambient-focal attention, and gaze transitions analysis by extracting eye movements from participants utilizing common off-the-shelf eye trackers. We validate the proposed eye movement visualizations by using two publicly available eye-tracking datasets. We showcase that the proposed dashboard can be utilized to visualize advanced eye movement measures generated using multiple data sources.
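The ambient-focal attention measure mentioned above is commonly quantified with the coefficient K (the z-scored fixation duration minus the z-scored amplitude of the following saccade, averaged across fixations); the abstract does not say whether the dashboard computes it exactly this way. A minimal sketch, assuming fixation durations and subsequent saccade amplitudes have already been extracted:

```python
import numpy as np

def coefficient_k(fix_durations, saccade_amplitudes):
    """Ambient/focal coefficient K: K > 0 suggests focal viewing, K < 0 ambient.

    fix_durations[i] is the duration of fixation i (e.g., ms);
    saccade_amplitudes[i] is the amplitude (e.g., degrees) of the saccade
    that follows it (arrays of equal length assumed).
    """
    d = np.asarray(fix_durations, dtype=float)
    a = np.asarray(saccade_amplitudes, dtype=float)
    z_d = (d - d.mean()) / d.std()
    z_a = (a - a.mean()) / a.std()
    return float(np.mean(z_d - z_a))

print(coefficient_k([180, 220, 400, 90], [5.2, 3.1, 0.8, 7.4]))
```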
Citations: 0
Human Motion Synthesis: A Diffusion Approach for Motion Stitching and In-Betweening
Pub Date: 2024-09-10 DOI: arxiv-2409.06791
Michael Adewole, Oluwaseyi Giwa, Favour Nerrise, Martins Osifeko, Ajibola Oyedeji
Human motion generation is an important area of research in many fields. In this work, we tackle the problem of motion stitching and in-betweening. Current methods either require manual effort or are incapable of handling longer sequences. To address these challenges, we propose a diffusion model with a transformer-based denoiser to generate realistic human motion. Our method demonstrated strong performance in generating in-betweening sequences, transforming a variable number of input poses into smooth and realistic motion sequences consisting of 75 frames at 15 fps, resulting in a total duration of 5 seconds. We present the performance evaluation of our method using quantitative metrics such as Frechet Inception Distance (FID), Diversity, and Multimodality, along with visual assessments of the generated outputs.
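For reference, the Frechet Inception Distance used in the evaluation compares the Gaussian statistics of feature embeddings of real versus generated samples; the paper's feature extractor is not described here, so the sketch below assumes feature matrices are already available (rows = samples). Values near zero indicate closely matched distributions.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """Frechet distance between Gaussians fitted to two feature sets (rows = samples)."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):  # sqrtm can pick up tiny imaginary parts from numerical noise
        covmean = covmean.real
    return float(np.sum((mu_r - mu_g) ** 2) + np.trace(cov_r + cov_g - 2.0 * covmean))

rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(size=(200, 8)), rng.normal(loc=0.5, size=(200, 8))))
```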
Citations: 0
VBIT: Towards Enhancing Privacy Control Over IoT Devices
Pub Date: 2024-09-10 DOI: arxiv-2409.06233
Jad Al Aaraj, Olivia Figueira, Tu Le, Isabela Figueira, Rahmadi Trimananda, Athina Markopoulou
Internet-of-Things (IoT) devices are increasingly deployed at home, at work, and in other shared and public spaces. IoT devices collect and share data with service providers and third parties, which poses privacy concerns. Although privacy enhancing tools are quite advanced in other application domains (e.g., advertising and tracker blockers for browsers), users currently have no convenient way to know or manage what data is collected and shared by IoT devices, and how. In this paper, we present VBIT, an interactive system combining Mixed Reality (MR) and web-based applications that allows users to: (1) uncover and visualize tracking services by IoT devices in an instrumented space and (2) take action to stop or limit that tracking. We design and implement VBIT to operate at the network traffic level, and we show that it has negligible performance overhead and offers flexibility and good usability. We perform a mixed-method user study consisting of an online survey and an in-person interview study. We show that VBIT users appreciate VBIT's transparency, control, and customization features, and that they become significantly more willing to install an IoT advertising and tracking blocker after using VBIT. In the process, we obtain design insights that can be used to further iterate and improve the design of VBIT and other systems for IoT transparency and control.
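The abstract does not detail how VBIT detects tracking at the network traffic level; one plausible ingredient is matching the hostnames that IoT devices contact against a tracker blocklist. The sketch below is illustrative only: the blocklist entries and observed hostnames are made-up placeholders, and a real system would feed it from live packet capture.

```python
# Hypothetical blocklist and observed DNS queries; in a deployed system these
# would come from a curated tracker list and captured network traffic.
TRACKER_SUFFIXES = {"ads.example-analytics.com", "telemetry.example-iot.net"}

def is_tracking_host(hostname: str) -> bool:
    """True if the hostname or any parent domain appears on the blocklist."""
    parts = hostname.lower().rstrip(".").split(".")
    return any(".".join(parts[i:]) in TRACKER_SUFFIXES for i in range(len(parts)))

observed = ["telemetry.example-iot.net", "time.google.com"]
for host in observed:
    print(host, "-> tracking" if is_tracking_host(host) else "-> ok")
```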
Citations: 0
Can Large Language Models Unlock Novel Scientific Research Ideas?
Pub Date: 2024-09-10 DOI: arxiv-2409.06185
Sandeep Kumar, Tirthankar Ghosal, Vinayak Goyal, Asif Ekbal
"An idea is nothing more nor less than a new combination of old elements"(Young, J.W.). The widespread adoption of Large Language Models (LLMs) andpublicly available ChatGPT have marked a significant turning point in theintegration of Artificial Intelligence (AI) into people's everyday lives. Thisstudy explores the capability of LLMs in generating novel research ideas basedon information from research papers. We conduct a thorough examination of 4LLMs in five domains (e.g., Chemistry, Computer, Economics, Medical, andPhysics). We found that the future research ideas generated by Claude-2 andGPT-4 are more aligned with the author's perspective than GPT-3.5 and Gemini.We also found that Claude-2 generates more diverse future research ideas thanGPT-4, GPT-3.5, and Gemini 1.0. We further performed a human evaluation of thenovelty, relevancy, and feasibility of the generated future research ideas.This investigation offers insights into the evolving role of LLMs in ideageneration, highlighting both its capability and limitations. Our workcontributes to the ongoing efforts in evaluating and utilizing language modelsfor generating future research ideas. We make our datasets and codes publiclyavailable.
"一个想法无非是旧元素的新组合"(Young, J.W.)。大型语言模型(LLMs)和公开可用的 ChatGPT 的广泛应用标志着人工智能(AI)融入人们日常生活的一个重要转折点。本研究基于研究论文中的信息,探讨了 LLM 在产生新颖研究想法方面的能力。我们对五个领域(如化学、计算机、经济学、医学和物理学)的 4 名 LLM 进行了深入研究。我们发现,与 GPT-3.5 和 Gemini 相比,Claude-2 和 GPT-4 产生的未来研究想法更符合作者的观点。我们还进一步对所生成的未来研究想法的新颖性、相关性和可行性进行了人工评估。这项调查深入了解了 LLM 在想法生成中不断演变的作用,突出了其能力和局限性。我们的工作为正在进行的评估和利用语言模型生成未来研究想法的工作做出了贡献。我们公开我们的数据集和代码。
{"title":"Can Large Language Models Unlock Novel Scientific Research Ideas?","authors":"Sandeep Kumar, Tirthankar Ghosal, Vinayak Goyal, Asif Ekbal","doi":"arxiv-2409.06185","DOIUrl":"https://doi.org/arxiv-2409.06185","url":null,"abstract":"\"An idea is nothing more nor less than a new combination of old elements\"\u0000(Young, J.W.). The widespread adoption of Large Language Models (LLMs) and\u0000publicly available ChatGPT have marked a significant turning point in the\u0000integration of Artificial Intelligence (AI) into people's everyday lives. This\u0000study explores the capability of LLMs in generating novel research ideas based\u0000on information from research papers. We conduct a thorough examination of 4\u0000LLMs in five domains (e.g., Chemistry, Computer, Economics, Medical, and\u0000Physics). We found that the future research ideas generated by Claude-2 and\u0000GPT-4 are more aligned with the author's perspective than GPT-3.5 and Gemini.\u0000We also found that Claude-2 generates more diverse future research ideas than\u0000GPT-4, GPT-3.5, and Gemini 1.0. We further performed a human evaluation of the\u0000novelty, relevancy, and feasibility of the generated future research ideas.\u0000This investigation offers insights into the evolving role of LLMs in idea\u0000generation, highlighting both its capability and limitations. Our work\u0000contributes to the ongoing efforts in evaluating and utilizing language models\u0000for generating future research ideas. We make our datasets and codes publicly\u0000available.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142183376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
User Preferences for Large Language Model versus Template-Based Explanations of Movie Recommendations: A Pilot Study
Pub Date: 2024-09-10 DOI: arxiv-2409.06297
Julien Albert, Martin Balfroid, Miriam Doh, Jeremie Bogaert, Luca La Fisca, Liesbet De Vos, Bryan Renard, Vincent Stragier, Emmanuel Jean
Recommender systems have become integral to our digital experiences, from online shopping to streaming platforms. Still, the rationale behind their suggestions often remains opaque to users. While some systems employ a graph-based approach, offering inherent explainability through paths associating recommended items and seed items, non-experts cannot easily understand these explanations. A popular alternative is to convert graph-based explanations into textual ones using a template and an algorithm, which we denote here as ''template-based'' explanations. Yet, these can sometimes come across as impersonal or uninspiring. A novel method would be to employ large language models (LLMs) for this purpose, which we denote as ''LLM-based''. To assess the effectiveness of LLMs in generating more resonant explanations, we conducted a pilot study with 25 participants. They were presented with three explanations: (1) traditional template-based, (2) LLM-based rephrasing of the template output, and (3) purely LLM-based explanations derived from the graph-based explanations. Although subject to high variance, preliminary findings suggest that LLM-based explanations may provide a richer and more engaging user experience, further aligning with user expectations. This study sheds light on the potential limitations of current explanation methods and offers promising directions for leveraging large language models to improve user satisfaction and trust in recommender systems.
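As a concrete illustration of what a ''template-based'' explanation looks like, the toy sketch below verbalizes a graph path linking a seed movie to a recommended one; the path structure and wording are invented for illustration, not taken from the study.

```python
# A toy "template-based" explanation: turn a path of (entity, relation, entity)
# triples connecting a seed item to a recommended item into a sentence.
def template_explanation(path: list[tuple[str, str, str]]) -> str:
    steps = [f"{head} {relation} {tail}" for head, relation, tail in path]
    return "We recommend this because " + ", and ".join(steps) + "."

path = [("Inception", "shares a director with", "Interstellar"),
        ("Interstellar", "belongs to the genre", "Sci-Fi")]
print(template_explanation(path))
```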
Citations: 0
"Come to us first": Centering Community Organizations in Artificial Intelligence for Social Good Partnerships "先来找我们":以社区组织为中心的人工智能社会公益伙伴关系
Pub Date : 2024-09-10 DOI: arxiv-2409.06814
Hongjin Lin, Naveena Karusala, Chinasa T. Okolo, Catherine D'Ignazio, Krzysztof Z. Gajos
Artificial Intelligence for Social Good (AI4SG) has emerged as a growing body of research and practice exploring the potential of AI technologies to tackle social issues. This area emphasizes interdisciplinary partnerships with community organizations, such as non-profits and government agencies. However, amidst excitement about new advances in AI and their potential impact, the needs, expectations, and aspirations of these community organizations--and whether they are being met--are not well understood. Understanding these factors is important to ensure that the considerable efforts by AI teams and community organizations can actually achieve the positive social impact they strive for. Drawing on the Data Feminism framework, we explored the perspectives of community organization members on their partnerships with AI teams through 16 semi-structured interviews. Our study highlights the pervasive influence of funding agendas and the optimism surrounding AI's potential. Despite the significant intellectual contributions and labor provided by community organization members, their goals were frequently sidelined in favor of other stakeholders, including AI teams. While many community organization members expected tangible project deployment, only two out of 14 projects we studied reached the deployment stage. However, community organization members sustained their belief in the potential of the projects, still seeing diminished goals as valuable. To enhance the efficacy of future collaborations, our participants shared their aspirations for success, calling for co-leadership starting from the early stages of projects. We propose data co-liberation as a grounding principle for approaching AI4SG moving forward, positing that community organizations' co-leadership is essential for fostering more effective, sustainable, and ethical development of AI.
Citations: 0
NSP: A Neuro-Symbolic Natural Language Navigational Planner
Pub Date: 2024-09-10 DOI: arxiv-2409.06859
William English, Dominic Simon, Rickard Ewetz, Sumit Jha
Path planners that can interpret free-form natural language instructions hold promise to automate a wide range of robotics applications. These planners simplify user interactions and enable intuitive control over complex semi-autonomous systems. While existing symbolic approaches offer guarantees on correctness and efficiency, they struggle to parse free-form natural language inputs. Conversely, neural approaches based on pre-trained Large Language Models (LLMs) can manage natural language inputs but lack performance guarantees. In this paper, we propose a neuro-symbolic framework for path planning from natural language inputs called NSP. The framework leverages the neural reasoning abilities of LLMs to craft i) symbolic representations of the environment and ii) a symbolic path planning algorithm. Next, a solution to the path planning problem is obtained by executing the algorithm on the environment representation. The framework uses a feedback loop from the symbolic execution environment to the neural generation process to self-correct syntax errors and satisfy execution time constraints. We evaluate our neuro-symbolic approach using a benchmark suite with 1500 path-planning problems. The experimental evaluation shows that our neuro-symbolic approach produces 90.1% valid paths that are on average 19-77% shorter than state-of-the-art neural approaches.
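NSP's generated planning code is not shown in the abstract; as a minimal stand-in for the symbolic half of the pipeline, the sketch below runs breadth-first search over a toy grid representation of the kind an LLM might emit (the grid, coordinates, and 4-connectivity are assumptions).

```python
from collections import deque

# A toy symbolic environment an LLM might emit: 0 = free cell, 1 = obstacle.
GRID = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]

def bfs_path(grid, start, goal):
    """Shortest 4-connected path via breadth-first search, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cur
                frontier.append((nr, nc))
    return None

print(bfs_path(GRID, (0, 0), (2, 0)))
# -> [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```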
Citations: 0
Designing Resource Allocation Tools to Promote Fair Allocation: Do Visualization and Information Framing Matter?
Pub Date: 2024-09-10 DOI: arxiv-2409.06688
Arnav Verma, Luiz Morais, Pierre Dragicevic, Fanny Chevalier
Studies on human decision-making focused on humanitarian aid have found that cognitive biases can hinder the fair allocation of resources. However, few HCI and Information Visualization studies have explored ways to overcome those cognitive biases. This work investigates whether the design of interactive resource allocation tools can help to promote allocation fairness. We specifically study the effect of presentation format (using text or visualization) and a specific framing strategy (showing resources allocated to groups or individuals). In our three crowdsourced experiments, we provided different tool designs to split money between two fictional programs that benefit two distinct communities. Our main finding indicates that individual-framed visualizations and text may be able to curb unfair allocations caused by group-framed designs. This work opens new perspectives that can motivate research on how interactive tools and visualizations can be engineered to combat cognitive biases that lead to inequitable decisions.
Citations: 0
Mazed and Confused: A Dataset of Cybersickness, Working Memory, Mental Load, Physical Load, and Attention During a Real Walking Task in VR
Pub Date: 2024-09-10 DOI: arxiv-2409.06898
Jyotirmay Nag Setu, Joshua M Le, Ripan Kumar Kundu, Barry Giesbrecht, Tobias Höllerer, Khaza Anuarul Hoque, Kevin Desai, John Quarles
Virtual Reality (VR) is quickly establishing itself in various industries, including training, education, medicine, and entertainment, in which users are frequently required to carry out multiple complex cognitive and physical activities. However, the relationship between cognitive activities, physical activities, and familiar feelings of cybersickness is not well understood and thus can be unpredictable for developers. Researchers have previously provided labeled datasets for predicting cybersickness while users are stationary, but there have been few labeled datasets on cybersickness while users are physically walking. Thus, from 39 participants, we collected head orientation, head position, eye tracking, images, physiological readings from external sensors, and the self-reported cybersickness severity, physical load, and mental load in VR. Throughout the data collection, participants navigated mazes via real walking and performed tasks challenging their attention and working memory. To demonstrate the dataset's utility, we conducted a case study of training classifiers in which we achieved 95% accuracy for cybersickness severity classification. The noteworthy performance of the straightforward classifiers makes this dataset ideal for future researchers to develop cybersickness detection and reduction models. To better understand the features that helped with classification, we performed SHAP (SHapley Additive exPlanations) analysis, highlighting the importance of eye tracking and physiological measures for cybersickness prediction while walking. This open dataset can allow future researchers to study the connection between cybersickness and cognitive loads and develop prediction models. This dataset will empower future VR developers to design efficient and effective Virtual Environments by improving cognitive load management and minimizing cybersickness.
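To illustrate the kind of case study described (a classifier plus SHAP analysis), here is a minimal sketch with synthetic stand-in features; the dataset's real features, labels, and model choice may differ.

```python
import numpy as np
import shap  # pip install shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical stand-ins for features such as eye tracking, physiology, head motion.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # toy severity label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))

# SHAP values indicate which features drive each prediction
# (return format varies slightly across shap versions).
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_te)
```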
Citations: 0
Human Impedance Modulation to Improve Visuo-Haptic Perception
Pub Date: 2024-09-10 DOI: arxiv-2409.06124
Xiaoxiao Cheng, Shixian Shen, Ekaterina Ivanova, Gerolamo Carboni, Atsushi Takagi, Etienne Burdet
Humans activate muscles to shape the mechanical interaction with their environment, but can they harness this control mechanism to best sense the environment? We investigated how participants adapt their muscle activation to visual and haptic information when tracking a randomly moving target with a robotic interface. The results exhibit a differentiated effect of these sensory modalities, where participants' muscle cocontraction increases with the haptic noise and decreases with the visual noise, in apparent contradiction to previous results. These results can be explained, and reconciled with previous findings, when considering muscle spring-like mechanics, where stiffness increases with cocontraction to regulate motion guidance. Increasing cocontraction to more closely follow the motion plan favors accurate visual over haptic information, while decreasing it avoids injecting visual noise and relies on accurate haptic information. We formulated this active sensing mechanism as the optimization of visuo-haptic information and effort. This OIE model can explain the adaptation of muscle activity to unimodal and multimodal sensory information when interacting with fixed or dynamic environments, or with another human, and can be used to optimize human-robot interaction.
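The OIE model's exact formulation is not given in the abstract; for intuition, the standard maximum-likelihood cue-combination rule captures the described pattern of down-weighting the noisier modality, with each estimate weighted by its inverse variance. A minimal sketch of that textbook rule (not the paper's model):

```python
def fuse_estimates(x_visual, var_visual, x_haptic, var_haptic):
    """Reliability-weighted (inverse-variance) fusion of two noisy estimates."""
    w_v = (1 / var_visual) / (1 / var_visual + 1 / var_haptic)
    w_h = 1.0 - w_v
    fused = w_v * x_visual + w_h * x_haptic
    fused_var = 1.0 / (1 / var_visual + 1 / var_haptic)
    return fused, fused_var

# More visual noise -> lower visual weight, mirroring greater reliance on haptics.
print(fuse_estimates(x_visual=1.0, var_visual=4.0, x_haptic=0.0, var_haptic=1.0))
```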
Citations: 0