
Latest ArXiv Publications

Comparison of Spatial Visualization Techniques for Radiation in Augmented Reality
Pub Date : 2024-03-08 DOI: 10.1145/3613904.3642646
F. McGee, Rod McCall, Joan Baixauli
Augmented Reality (AR) provides a safe and low-cost option for hazardous safety training because it allows the visualization of aspects that may be invisible, such as radiation. Effectively communicating such threats visually in the environment around the user is not straightforward. This work describes visually encoding radiation using the spatial awareness mesh of an AR Head Mounted Display. We leverage the AR device's GPUs to develop a real-time solution that accumulates multiple dynamic sources, uses stencils to prevent the environment from being oversaturated with a visualization, and supports encoding direction explicitly in the visualization. We perform a user study (25 participants) of different visualizations and obtain user feedback. Results show that there are complex interactions: while no visual representation was statistically superior or inferior, user opinions vary widely. We also discuss the evaluation approaches and provide recommendations.
Citations: 0
Exploring the Impact of Interconnected External Interfaces in Autonomous Vehicles on Pedestrian Safety and Experience
Pub Date : 2024-03-08 DOI: 10.1145/3613904.3642118
Tram Thi Minh Tran, Callum Parker, Marius Hoggenmüller, Yiyuan Wang, M. Tomitsch
Policymakers advocate the use of external Human-Machine Interfaces (eHMIs) to allow autonomous vehicles (AVs) to communicate their intentions or status. Nonetheless, scalability concerns arise in complex traffic scenarios, such as potentially increasing pedestrian cognitive load or conveying contradictory signals. Building upon prior work, our study explores 'interconnected eHMIs,' where multiple AV interfaces are interconnected to provide pedestrians with clear and unified information. In a virtual reality study (N=32), we assessed the effectiveness of this concept in improving pedestrian safety and the crossing experience. We compared the results against two conditions: no eHMIs and unconnected eHMIs. Results indicated that interconnected eHMIs enhanced feelings of safety and encouraged cautious crossings. However, certain design elements, such as the use of the colour red, led to confusion and discomfort. Prior knowledge slightly influenced perceptions of interconnected eHMIs, underscoring the need for refined user education. We conclude with practical implications and directions for future eHMI design research.
Citations: 0
Is Cosine-Similarity of Embeddings Really About Similarity?
Pub Date : 2024-03-08 DOI: 10.1145/3589335.3651526
Harald Steck, Chaitanya Ekanadham, Nathan Kallus
Cosine-similarity is the cosine of the angle between two vectors, or equivalently the dot product between their normalizations. A popular application is to quantify semantic similarity between high-dimensional objects by applying cosine-similarity to a learned low-dimensional feature embedding. In practice, this can work better, but sometimes also worse, than the unnormalized dot product between the embedded vectors. To gain insight into this empirical observation, we study embeddings derived from regularized linear models, where closed-form solutions facilitate analytical insights. We derive analytically how cosine-similarity can yield arbitrary and therefore meaningless `similarities.' For some linear models the similarities are not even unique, while for others they are implicitly controlled by the regularization. We discuss implications beyond linear models: a combination of different regularizations is employed when learning deep models; these have implicit and unintended effects when taking cosine-similarities of the resulting embeddings, rendering results opaque and possibly arbitrary. Based on these insights, we caution against blindly using cosine-similarity and outline alternatives.
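The definition in the abstract's first sentence, and how cosine-similarity can rank pairs differently from the raw dot product, can be sketched in a few lines (the vectors below are illustrative examples, not data from the paper):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between a and b: the dot product of their normalizations.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([1.0, 0.0])
item_long = np.array([3.0, 3.0])    # large magnitude, 45 degrees off the query
item_short = np.array([0.5, 0.0])   # small magnitude, perfectly aligned with the query

# The unnormalized dot product rewards magnitude and ranks the long vector higher ...
assert np.dot(query, item_long) > np.dot(query, item_short)   # 3.0 > 0.5
# ... while cosine-similarity ignores magnitude and ranks the aligned vector higher.
assert cosine_similarity(query, item_short) > cosine_similarity(query, item_long)
```

The disagreement between the two rankings is exactly the kind of behavior the paper analyzes for embeddings from regularized linear models.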
Citations: 0
Cross-Modal and Uni-Modal Soft-Label Alignment for Image-Text Retrieval
Pub Date : 2024-03-08 DOI: 10.1609/aaai.v38i16.29789
Hailang Huang, Zhijie Nie, Ziqiao Wang, Ziyu Shang
Current image-text retrieval methods have demonstrated impressive performance in recent years. However, they still face two problems: the inter-modal matching missing problem and the intra-modal semantic loss problem. These problems can significantly affect the accuracy of image-text retrieval. To address these challenges, we propose a novel method called Cross-modal and Uni-modal Soft-label Alignment (CUSA). Our method leverages the power of uni-modal pre-trained models to provide soft-label supervision signals for the image-text retrieval model. Additionally, we introduce two alignment techniques, Cross-modal Soft-label Alignment (CSA) and Uni-modal Soft-label Alignment (USA), to overcome false negatives and enhance similarity recognition between uni-modal samples. Our method is designed to be plug-and-play, meaning it can be easily applied to existing image-text retrieval models without changing their original architectures. Through extensive experiments on various image-text retrieval models and datasets, we demonstrate that our method can consistently improve the performance of image-text retrieval and achieve new state-of-the-art results. Furthermore, our method can also boost the uni-modal retrieval performance of image-text retrieval models, enabling universal retrieval. The code and supplementary files can be found at https://github.com/lerogo/aaai24_itr_cusa.
Citations: 0
Spectrum Translation for Refinement of Image Generation (STIG) Based on Contrastive Learning and Spectral Filter Profile
Pub Date : 2024-03-08 DOI: 10.1609/aaai.v38i4.28074
Seokjun Lee, Seung-Won Jung, Hyunseok Seo
Currently, image generation and synthesis have remarkably progressed with generative models. Despite photo-realistic results, intrinsic discrepancies are still observed in the frequency domain. This spectral discrepancy appears not only in generative adversarial networks but also in diffusion models. In this study, we propose a framework to effectively mitigate the frequency-domain disparity of generated images and thereby improve the generative performance of both GAN and diffusion models. This is realized by spectrum translation for the refinement of image generation (STIG), based on contrastive learning. We adopt the theoretical logic of frequency components in various generative networks. The key idea is to refine the spectrum of the generated image via the concepts of image-to-image translation and contrastive learning, in terms of digital signal processing. We evaluate our framework across eight fake-image datasets and various cutting-edge models to demonstrate the effectiveness of STIG. Our framework outperforms other cutting-edge methods, showing significant decreases in FID and in the log frequency distance of the spectrum. We further emphasize that STIG improves image quality by decreasing the spectral anomaly. Additionally, validation results show that a frequency-based deepfake detector is more easily confused when fake spectra are manipulated by STIG.
Citations: 1
RLPeri: Accelerating Visual Perimetry Test with Reinforcement Learning and Convolutional Feature Extraction
Pub Date : 2024-03-08 DOI: 10.1609/aaai.v38i20.30247
Tanvi Verma, LinhLe Dinh, Nicholas Tan, Xinxing Xu, Chingyu Cheng, Yong Liu
Visual perimetry is an important eye examination that helps detect vision problems caused by ocular or neurological conditions. During the test, a patient's gaze is fixed at a specific location while light stimuli of varying intensities are presented in central and peripheral vision. Based on the patient's responses to the stimuli, the visual field mapping and sensitivity are determined. However, maintaining high levels of concentration throughout the test can be challenging for patients, leading to increased examination times and decreased accuracy. In this work, we present RLPeri, a reinforcement-learning-based approach to optimizing visual perimetry testing. By determining the optimal sequence of locations and initial stimulus values, we aim to reduce the examination time without compromising accuracy. Additionally, we incorporate reward-shaping techniques to further improve testing performance. To monitor the patient's responses over time during testing, we represent the test's state as a pair of 3D matrices. We apply two different convolutional kernels to extract spatial features across locations as well as features across different stimulus values for each location. Through experiments, we demonstrate that our approach reduces examination time by 10-20% while maintaining accuracy, as compared to state-of-the-art methods. With the presented approach, we aim to make visual perimetry testing more efficient and patient-friendly, while still providing accurate results.
Citations: 0
Considering Nonstationary within Multivariate Time Series with Variational Hierarchical Transformer for Forecasting
Pub Date : 2024-03-08 DOI: 10.1609/aaai.v38i14.29483
Muyao Wang, Wenchao Chen, Bo Chen
The forecasting of Multivariate Time Series (MTS) has long been an important but challenging task. Due to the non-stationarity problem across long-distance time steps, previous studies primarily adopt stationarization methods to attenuate the non-stationarity of the original series for better predictability. However, existing methods always adopt the stationarized series, which ignores the inherent non-stationarity, and they have difficulty modeling MTS with complex distributions due to the lack of stochasticity. To tackle these problems, we first develop a powerful hierarchical probabilistic generative module to capture the non-stationarity and stochasticity characteristics within MTS, and then combine it with a transformer to obtain a well-defined variational generative dynamic model named Hierarchical Time series Variational Transformer (HTV-Trans), which recovers the intrinsic non-stationary information into temporal dependencies. Being a powerful probabilistic model, HTV-Trans is utilized to learn expressive representations of MTS and is applied to forecasting tasks. Extensive experiments on diverse datasets show the efficiency of HTV-Trans on MTS forecasting tasks.
Citations: 0
Digital Wellbeing Redefined: Toward User-Centric Approach for Positive Social Media Engagement
Pub Date : 2024-03-08 DOI: 10.1145/3647632.3651392
Yixue Zhao, Tianyi Li, Michael Sobolev
The prevalence of social media and its escalating impact on mental health has highlighted the need for effective digital wellbeing strategies. Current digital wellbeing interventions have primarily focused on reducing screen time and social media use, often neglecting the potential benefits of these platforms. This paper introduces a new perspective centered around empowering positive social media experiences, instead of limiting users with restrictive rules. In line with this perspective, we lay out the key requirements that should be considered in future work, aiming to spark a dialogue in this emerging area. We further present our initial effort to address these requirements with PauseNow, an innovative digital wellbeing intervention designed to align users' digital behaviors with their intentions. PauseNow leverages digital nudging and intention-aware recommendations to gently guide users back to their original intentions when they "get lost" during their digital usage, promoting a more mindful use of social media.
Citations: 0
To Reach the Unreachable: Exploring the Potential of VR Hand Redirection for Upper Limb Rehabilitation
Pub Date : 2024-03-08 DOI: 10.1145/3613904.3642912
Peixuan Xiong, Yukai Zhang, Nandi Zhang, Shihan Fu, Xin Li, Yadan Zheng, Jinni Zhou, Xiquan Hu, Mingming Fan
Rehabilitation therapies are widely employed to assist people with motor impairments in regaining control over their affected body parts. Nevertheless, factors such as fatigue and low self-efficacy can hinder patient compliance during extensive rehabilitation processes. Utilizing hand redirection in virtual reality (VR) enables patients to accomplish seemingly more challenging tasks, thereby bolstering their motivation and confidence. While previous research has investigated user experience and hand redirection among able-bodied people, its effects on motor-impaired people remain unexplored. In this paper, we present a VR rehabilitation application that harnesses hand redirection. Through a user study and semi-structured interviews, we examine the impact of hand redirection on the rehabilitation experiences of people with motor impairments and its potential to enhance their motivation for upper limb rehabilitation. Our findings suggest that patients are not sensitive to hand movement inconsistency, and the majority express interest in incorporating hand redirection into future long-term VR rehabilitation programs.
{"title":"To Reach the Unreachable: Exploring the Potential of VR Hand Redirection for Upper Limb Rehabilitation","authors":"Peixuan Xiong, Yukai Zhang, Nandi Zhang, Shihan Fu, Xin Li, Yadan Zheng, Jinni Zhou, Xiquan Hu, Mingming Fan","doi":"10.1145/3613904.3642912","DOIUrl":"https://doi.org/10.1145/3613904.3642912","url":null,"abstract":"Rehabilitation therapies are widely employed to assist people with motor impairments in regaining control over their affected body parts. Nevertheless, factors such as fatigue and low self-efficacy can hinder patient compliance during extensive rehabilitation processes. Utilizing hand redirection in virtual reality (VR) enables patients to accomplish seemingly more challenging tasks, thereby bolstering their motivation and confidence. While previous research has investigated user experience and hand redirection among able-bodied people, its effects on motor-impaired people remain unexplored. In this paper, we present a VR rehabilitation application that harnesses hand redirection. Through a user study and semi-structured interviews, we examine the impact of hand redirection on the rehabilitation experiences of people with motor impairments and its potential to enhance their motivation for upper limb rehabilitation. Our findings suggest that patients are not sensitive to hand movement inconsistency, and the majority express interest in incorporating hand redirection into future long-term VR rehabilitation programs.","PeriodicalId":513202,"journal":{"name":"ArXiv","volume":"31 36","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140396802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
How Culture Shapes What People Want From AI
Pub Date : 2024-03-08 DOI: 10.1145/3613904.3642660
Xiao Ge, Chunchen Xu, Daigo Misaki, Hazel Rose Markus, Jeanne L Tsai
There is an urgent need to incorporate the perspectives of culturally diverse groups into AI developments. We present a novel conceptual framework for research that aims to expand, reimagine, and reground mainstream visions of AI using independent and interdependent cultural models of the self and the environment. Two survey studies support this framework and provide preliminary evidence that people apply their cultural models when imagining their ideal AI. Compared with European American respondents, Chinese respondents viewed it as less important to control AI and more important to connect with AI, and were more likely to prefer AI with capacities to influence. Reflecting both cultural models, findings from African American respondents resembled both European American and Chinese respondents. We discuss study limitations and future directions and highlight the need to develop culturally responsive and relevant AI to serve a broader segment of the world population.
{"title":"How Culture Shapes What People Want From AI","authors":"Xiao Ge, Chunchen Xu, Daigo Misaki, Hazel Rose Markus, Jeanne L Tsai","doi":"10.1145/3613904.3642660","DOIUrl":"https://doi.org/10.1145/3613904.3642660","url":null,"abstract":"There is an urgent need to incorporate the perspectives of culturally diverse groups into AI developments. We present a novel conceptual framework for research that aims to expand, reimagine, and reground mainstream visions of AI using independent and interdependent cultural models of the self and the environment. Two survey studies support this framework and provide preliminary evidence that people apply their cultural models when imagining their ideal AI. Compared with European American respondents, Chinese respondents viewed it as less important to control AI and more important to connect with AI, and were more likely to prefer AI with capacities to influence. Reflecting both cultural models, findings from African American respondents resembled both European American and Chinese respondents. We discuss study limitations and future directions and highlight the need to develop culturally responsive and relevant AI to serve a broader segment of the world population.","PeriodicalId":513202,"journal":{"name":"ArXiv","volume":"28 32","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140396897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1