
The 34th Annual ACM Symposium on User Interface Software and Technology: Latest Publications

Route Tapestries: Navigating 360° Virtual Tour Videos Using Slit-Scan Visualizations
Pub Date: 2021-10-10 DOI: 10.1145/3472749.3474746
Jiannan Li, Jia-Ming Lyu, Maurício Sousa, Ravin Balakrishnan, Anthony Tang, Tovi Grossman
An increasingly popular way of experiencing remote places is by viewing 360° virtual tour videos, which show the surrounding view while traveling through an environment. However, finding particular locations in these videos can be difficult because current interfaces rely on distorted frame previews for navigation. To alleviate this usability issue, we propose Route Tapestries, continuous orthographic-perspective projection of scenes along camera routes. We first introduce an algorithm for automatically constructing Route Tapestries from a 360° video, inspired by the slit-scan photography technique. We then present a desktop video player interface using a Route Tapestry timeline for navigation. An online evaluation using a target-seeking task showed that Route Tapestries allowed users to locate targets 22% faster than with YouTube-style equirectangular previews and reduced the failure rate by 75% compared to a more conventional row-of-thumbnail strip preview. Our results highlight the value of reducing visual distortion and providing continuous visual contexts in previews for navigating 360° virtual tour videos.
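The basic slit-scan idea behind the technique is easy to illustrate. Below is a minimal sketch, not the authors' full pipeline (which projects scenes orthographically along the camera route): it samples a narrow vertical slit from each frame of the video and concatenates the slits horizontally into a continuous strip. The use of OpenCV, the center slit position, and the slit width are all assumptions here.

```python
# Minimal slit-scan sketch: one narrow column per frame, stacked left-to-right.
import cv2
import numpy as np

def slit_scan(video_path: str, slit_width: int = 2) -> np.ndarray:
    cap = cv2.VideoCapture(video_path)
    slits = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Sample a vertical slit at the horizontal center of the frame,
        # i.e., roughly the direction of travel in an equirectangular view.
        center = frame.shape[1] // 2
        slits.append(frame[:, center:center + slit_width])
    cap.release()
    # Concatenate the slits horizontally: the x-axis becomes time/route position.
    return np.hstack(slits)

tapestry = slit_scan("tour_360.mp4")  # hypothetical input file
cv2.imwrite("route_tapestry.png", tapestry)
```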
Citations: 5
Planning Epidemic Interventions with EpiPolicy
Pub Date: 2021-10-10 DOI: 10.1145/3472749.3474794
Zain Tariq, M. Mannino, Mai Le Xuan Anh, Whitney Bagge, A. Abouzeid, D. Shasha
Model-driven policymaking for epidemic control is a challenging collaborative process. It begins when a team of public-health officials, epidemiologists, and economists construct a reasonably predictive disease model representative of the team’s region of interest as a function of its unique socio-economic and demographic characteristics. As the team considers possible interventions such as school closures, social distancing, vaccination drives, etc., they need to simultaneously model each intervention’s effect on disease spread and economic cost. The team then engages in an extensive what-if analysis process to determine a cost-effective policy: a schedule of when, where and how extensively each intervention should be applied. This policymaking process is often an iterative and laborious programming-intensive effort where parameters are introduced and refined, model and intervention behaviors are modified, and schedules changed. We have designed and developed EpiPolicy to support this effort. EpiPolicy is a policy aid and epidemic simulation tool that supports the mathematical specification and simulation of disease and population models, the programmatic specification of interventions and the declarative construction of schedules. EpiPolicy’s design supports a separation of concerns in the modeling process and enables capabilities such as the iterative and automatic exploration of intervention plans with Monte Carlo simulations to find a cost-effective one. We report expert feedback on EpiPolicy. In general, experts found EpiPolicy’s capabilities powerful and transformative, when compared with their current practice.
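To make the schedule-driven simulation idea concrete, here is a toy sketch in the spirit of EpiPolicy; this is not EpiPolicy's actual API or model. A basic SIR model's transmission rate is scaled by whichever interventions a declarative schedule makes active on a given day; all parameter values and the schedule format are assumptions.

```python
# Toy SIR simulation driven by a declarative intervention schedule.
import numpy as np

def simulate(days=180, N=1_000_000, beta=0.3, gamma=0.1, schedule=()):
    S, I, R = N - 100.0, 100.0, 0.0
    infected = []
    for day in range(days):
        # Combine the effects of all interventions active today.
        eff_beta = beta
        for start, end, reduction in schedule:
            if start <= day < end:
                eff_beta *= (1.0 - reduction)
        new_inf = eff_beta * S * I / N
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        infected.append(I)
    return np.array(infected)

# Schedule entries: (start_day, end_day, transmission reduction).
plan = [(30, 90, 0.4),   # e.g., school closures
        (30, 120, 0.2)]  # e.g., social distancing
peak = simulate(schedule=plan).max()
print(f"Peak infections under this plan: {peak:,.0f}")
```

A what-if analysis of the kind the paper describes amounts to sweeping or sampling many such `plan` values (e.g., with Monte Carlo) and comparing peak infections against the economic cost of each schedule.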
Citations: 2
iText: Hands-free Text Entry on an Imaginary Keyboard for Augmented Reality Systems
Pub Date: 2021-10-10 DOI: 10.1145/3472749.3474788
Xueshi Lu, Difeng Yu, Hai-Ning Liang, Jorge Gonçalves
Text entry is an important and frequent task in interactive devices including augmented reality head-mounted displays (AR HMDs). In current AR HMDs, there are still two main open challenges to overcome for efficient and usable text entry: arm fatigue due to mid-air input and visual occlusion because of their small see-through displays. To address these challenges, we present iText, a technique for AR HMDs that is hands-free and is based on an imaginary (invisible) keyboard. We first show that it is feasible and practical to use an imaginary keyboard on AR HMDs. Then, we evaluated its performance and usability with three hands-free selection mechanisms: eye blinks (E-Type), dwell (D-Type), and swipe gestures (G-Type). Our results show that users could achieve an average text entry speed of 11.95, 9.03 and 9.84 words per minute (WPM) with E-Type, D-Type, and G-Type, respectively. Given that iText with E-Type outperformed the other two selection mechanisms in text entry rate and subjective feedback, we ran a third, 5-day study. Our results show that iText with E-Type can achieve an average text entry rate of 13.76 WPM with a mean word error rate of 1.5%. In short, iText can enable efficient eyes-free text entry and can be useful for various application scenarios in AR HMDs.
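The WPM and error-rate measures reported here follow standard text-entry conventions. As a reference, this is how such metrics are conventionally computed (the standard formulation, not code from the paper): WPM assumes five characters per word, and the error rate uses the minimum string distance between presented and transcribed text.

```python
# Conventional text-entry metrics: words per minute and MSD error rate.
def wpm(transcribed: str, seconds: float) -> float:
    # (|T| - 1) characters by the usual convention; 5 characters = 1 word.
    return (len(transcribed) - 1) / seconds * 60.0 / 5.0

def msd(a: str, b: str) -> int:
    # Levenshtein (minimum string) distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def error_rate(presented: str, transcribed: str) -> float:
    return msd(presented, transcribed) / max(len(presented), len(transcribed))

print(wpm("the quick brown fox", 8.0))          # 27.0 WPM
print(error_rate("hello world", "helo world"))  # ~0.09
```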
Citations: 22
Chemical Haptics: Rendering Haptic Sensations via Topical Stimulants
Pub Date: 2021-10-10 DOI: 10.1145/3472749.3474747
Jasmine Lu, Ziwei Liu, Jas Brooks, Pedro Lopes
We propose a new class of haptic devices that provide haptic sensations by delivering liquid stimulants to the user's skin; we call this chemical haptics. Upon absorbing these stimulants, which contain safe and small doses of key active ingredients, receptors in the user's skin are chemically triggered, rendering distinct haptic sensations. We identified five chemicals that can render lasting haptic sensations: tingling (sanshool), numbing (lidocaine), stinging (cinnamaldehyde), warming (capsaicin), and cooling (menthol). To enable the application of our novel approach in a variety of settings (such as VR), we engineered a self-contained wearable that can be worn anywhere on the user's skin (e.g., face, arms, legs). Implemented as a soft silicone patch, our device uses micropumps to push the liquid stimulants through channels that are open to the user's skin, enabling topical stimulants to be absorbed by the skin as they pass through. Our approach presents two unique benefits. First, it enables sensations, such as numbing, not possible with existing haptic devices. Second, our approach offers a new pathway, via the skin's chemical receptors, for achieving multiple haptic sensations using a single actuator, which would otherwise require combining multiple actuators (e.g., Peltier, vibration motors, electro-tactile stimulation). We evaluated our approach by means of two studies. In our first study, we characterized the temporal profiles of sensations elicited by each chemical. Using these insights, we designed five interactive VR experiences utilizing chemical haptics, and in our second user study, participants rated these VR experiences with chemical haptics as more immersive than without. Finally, as the first work exploring the use of chemical haptics on the skin, we offer recommendations to designers for how they may employ our approach for their interactive experiences.
Citations: 23
KondoCloud: Improving Information Management in Cloud Storage via Recommendations Based on File Similarity
Pub Date: 2021-10-10 DOI: 10.1145/3472749.3474736
Will Brackenbury, A. Mcnutt, K. Chard, Aaron J. Elmore, Blase Ur
Users face many challenges in keeping their personal file collections organized. While current file-management interfaces help users retrieve files in disorganized repositories, they do not aid in organization. Pertinent files can be difficult to find, and files that should have been deleted may remain. To help, we designed KondoCloud, a file-browser interface for personal cloud storage. KondoCloud makes machine learning-based recommendations of files users may want to retrieve, move, or delete. These recommendations leverage the intuition that similar files should be managed similarly. We developed and evaluated KondoCloud through two complementary online user studies. In our Observation Study, we logged the actions of 69 participants who spent 30 minutes manually organizing their own Google Drive repositories. We identified high-level organizational strategies, including moving related files to newly created sub-folders and extensively deleting files. To train the classifiers that underpin KondoCloud’s recommendations, we had participants label whether pairs of files were similar and whether they should be managed similarly. In addition, we extracted ten metadata and content features from all files in participants’ repositories. Our logistic regression classifiers all achieved F1 scores of 0.72 or higher. In our Evaluation Study, 62 participants used KondoCloud either with or without recommendations. Roughly half of participants accepted a non-trivial fraction of recommendations, and some participants accepted nearly all of them. Participants who were shown the recommendations were more likely to delete related files located in different directories. They also generally felt the recommendations improved efficiency. Participants who were not shown recommendations nonetheless manually performed about a third of the actions that would have been recommended.
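The classification setup the paper describes is pairwise: each example is a pair of files represented by metadata and content features, with a label for whether the pair should be managed similarly. Below is a hedged sketch of that setup with a logistic regression; the feature names and data are invented placeholders, not the paper's actual ten features.

```python
# Sketch: logistic regression over pairwise file-similarity features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in feature columns for a file pair: same extension, filename
# similarity, size ratio, folder-depth match, creation-time gap.
X = rng.random((500, 5))
# Synthetic "manage similarly" label for illustration only.
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```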
Citations: 2
ReflecTrack: Enabling 3D Acoustic Position Tracking Using Commodity Dual-Microphone Smartphones
Pub Date: 2021-10-10 DOI: 10.1145/3472749.3474805
Yuzhou Zhuang, Yuntao Wang, Yukang Yan, Xuhai Xu, Yuanchun Shi
3D position tracking on smartphones has the potential to unlock a variety of novel applications, but has not been made widely available due to limitations in smartphone sensors. In this paper, we propose ReflecTrack, a novel 3D acoustic position tracking method for commodity dual-microphone smartphones. A ubiquitous speaker (e.g., smartwatch or earbud) generates inaudible Frequency Modulated Continuous Wave (FMCW) acoustic signals that are picked up by both smartphone microphones. To enable 3D tracking with two microphones, we introduce a reflective surface that can be easily found in everyday objects near the smartphone. Thus, the microphones can receive sound from the speaker and echoes from the surface for FMCW-based acoustic ranging. To simultaneously estimate the distances from the direct and reflective paths, we propose the echo-aware FMCW technique with a new signal pattern and target detection process. Our user study shows that ReflecTrack achieves a median error of 28.4 mm in the 60cm × 60cm × 60cm space and 22.1 mm in the 30cm × 30cm × 30cm space for 3D positioning. We demonstrate the easy accessibility of ReflecTrack using everyday surfaces and objects with several typical applications of 3D position tracking, including 3D input for smartphones, fine-grained gesture recognition, and motion tracking in smartphone-based VR systems.
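The FMCW ranging principle underlying the system can be illustrated with a simplified, single-path sketch (the paper's echo-aware pattern, which separates the direct and reflected paths, is more involved): mixing the received signal with the transmitted chirp yields a beat tone at f_b = slope * tau, so the acoustic path length is c * f_b / slope. All parameter values below are assumptions.

```python
# Single-path FMCW ranging: dechirp, then read the beat-frequency peak.
import numpy as np

fs = 48_000                       # sample rate (Hz)
f0, B, T = 17_000, 4_000, 0.04    # near-inaudible chirp: 17-21 kHz over 40 ms
slope = B / T
t = np.arange(int(fs * T)) / fs
tx = np.cos(2 * np.pi * (f0 * t + 0.5 * slope * t ** 2))

# Simulate one propagation path of 0.5 m at the speed of sound.
c, true_d = 343.0, 0.5
delay = int(round(true_d / c * fs))
rx = np.roll(tx, delay)  # wraps at the start; fine for this illustration

# Dechirp by mixing, window, and locate the beat-frequency peak.
beat = tx * rx
spec = np.abs(np.fft.rfft(beat * np.hanning(len(beat))))
freqs = np.fft.rfftfreq(len(beat), 1 / fs)
f_b = freqs[np.argmax(spec[1:]) + 1]  # skip the DC bin
print(f"estimated path length: {f_b / slope * c:.3f} m (true {true_d} m)")
```

With a 40 ms chirp the frequency resolution is 25 Hz, i.e., roughly centimeter-level range quantization at this slope, which is consistent in spirit with the millimeter-to-centimeter errors reported above once finer peak interpolation and averaging are applied.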
Citations: 7
Tabs.do: Task-Centric Browser Tab Management
Pub Date: 2021-10-10 DOI: 10.1145/3472749.3474777
Joseph Chee Chang
Despite the increasing complexity and scale of people’s online activities, browser interfaces have stayed largely the same since tabs were introduced in major browsers nearly 20 years ago. The gap between simple tab-based browser interfaces and the complexity of users’ tasks can lead to serious adverse effects – commonly referred to as “tab overload.” This paper introduces a Chrome extension called Tabs.do, which explores bringing a task-centric approach to the browser, helping users to group their tabs into tasks and then organize, prioritize, and switch between those tasks fluidly. To lower the cost of importing, Tabs.do uses machine learning to make intelligent suggestions for grouping users’ open tabs into task bundles by exploiting behavioral and semantic features. We conducted a field deployment study where participants used Tabs.do with their real-life tasks in the wild, and showed that Tabs.do can decrease tab clutter, enable users to create rich task structures with lightweight interactions, and allow participants to context-switch among tasks more efficiently.
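A hedged sketch of the semantic side of such tab grouping (Tabs.do's actual model also uses behavioral features, and its implementation is not shown here): embed open-tab titles with TF-IDF and cluster them into candidate task bundles.

```python
# Sketch: group tab titles into task bundles by semantic similarity.
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer

tabs = [
    "Hotels in Lisbon - Booking.com",
    "Lisbon 4-day itinerary",
    "Cheap flights to Lisbon",
    "numpy.fft.rfft - NumPy Manual",
    "scipy.signal.chirp - SciPy Manual",
    "Fourier transforms in NumPy - tutorial",
]
X = TfidfVectorizer(stop_words="english").fit_transform(tabs).toarray()
labels = AgglomerativeClustering(n_clusters=2, metric="cosine",
                                 linkage="average").fit_predict(X)
for cluster in sorted(set(labels)):
    print(f"Task bundle {cluster}:",
          [t for t, l in zip(tabs, labels) if l == cluster])
```

In a real extension the cluster count would not be fixed; a distance threshold or the behavioral signals (co-activation, switching patterns) would decide how many bundles to suggest.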
Citations: 10
AirConstellations: In-Air Device Formations for Cross-Device Interaction via Multiple Spatially-Aware Armatures
Pub Date: 2021-10-10 DOI: 10.1145/3472749.3474820
Nicolai Marquardt, Nathalie Henry Riche, Christian Holz, Hugo Romat, M. Pahud, Frederik Brudy, David Ledo, Chunjong Park, M. Nicholas, T. Seyed, E. Ofek, Bongshin Lee, W. Buxton, K. Hinckley
AirConstellations supports a unique semi-fixed style of cross-device interactions via multiple self-spatially-aware armatures to which users can easily attach (or detach) tablets and other devices. In particular, AirConstellations affords highly flexible and dynamic device formations where the users can bring multiple devices together in-air — with 2–5 armatures poseable in 7DoF within the same workspace — to suit the demands of their current task, social situation, app scenario, or mobility needs. This affords an interaction metaphor where relative orientation, proximity, attaching (or detaching) devices, and continuous movement into and out of ad-hoc ensembles can drive context-sensitive interactions. Yet all devices remain self-stable in useful configurations even when released in mid-air. We explore flexible physical arrangement, feedforward of transition options, and layering of devices in-air across a variety of multi-device app scenarios. These include video conferencing with flexible arrangement of the person-space of multiple remote participants around a shared task-space, layered and tiled device formations with overview+detail and shared-to-personal transitions, and flexible composition of UI panels and tool palettes across devices for productivity applications. A preliminary interview study highlights user reactions to AirConstellations, such as for minimally disruptive device formations, easier physical transitions, and balancing ”seeing and being seen” in remote work.
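As a purely illustrative aside (not the authors' implementation), the "relative orientation and proximity drive context-sensitive interactions" idea reduces to simple pose geometry: given each device's position and screen normal, distance and facing angle can be mapped to a coarse formation label. The thresholds below are invented.

```python
# Sketch: classify a two-device formation from poses.
import numpy as np

def formation(pos_a, normal_a, pos_b, normal_b):
    d = np.linalg.norm(np.asarray(pos_b) - np.asarray(pos_a))
    cos = np.dot(normal_a, normal_b) / (
        np.linalg.norm(normal_a) * np.linalg.norm(normal_b))
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    if d < 0.05 and angle < 20:
        return "tiled"           # side by side, screens roughly coplanar
    if angle > 150:
        return "facing"          # screens pointed at each other
    return "loose ensemble"

print(formation([0, 0, 0], [0, 0, 1], [0.3, 0, 0.3], [0, 0, -1]))  # facing
```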
Citations: 13
VEmotion: Using Driving Context for Indirect Emotion Prediction in Real-Time
Pub Date: 2021-10-10 DOI: 10.1145/3472749.3474775
David Bethge, T. Kosch, L. Chuang, Albrecht Schmidt
Detecting emotions while driving remains a challenge in Human-Computer Interaction. Current methods to estimate the driver’s experienced emotions use physiological sensing (e.g., skin-conductance, electroencephalography), speech, or facial expressions. However, drivers need to use wearable devices, perform explicit voice interaction, or require robust facial expressiveness. We present VEmotion (Virtual Emotion Sensor), a novel method to predict driver emotions in an unobtrusive way using contextual smartphone data. VEmotion analyzes information including traffic dynamics, environmental factors, in-vehicle context, and road characteristics to implicitly classify driver emotions. We demonstrate the applicability in a real-world driving study (N = 12) to evaluate the emotion prediction performance. Our results show that VEmotion outperforms facial expressions by 29% in a person-dependent classification and by 8.5% in a person-independent classification. We discuss how VEmotion enables empathic car interfaces to sense the driver’s emotions and will provide in-situ interface adaptations on-the-go.
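A hedged sketch of context-based emotion classification in the spirit of VEmotion (not the paper's model, feature set, or labels): a classifier over illustrative driving-context features predicting a discrete emotion class. Features, labels, and the synthetic rule are placeholders.

```python
# Sketch: predict a driver-emotion label from contextual features only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 600
# Stand-in context features: speed (km/h), traffic level, weather, curvature.
X = np.column_stack([
    rng.uniform(0, 130, n),   # speed
    rng.integers(0, 4, n),    # traffic density level
    rng.integers(0, 3, n),    # weather code
    rng.uniform(0, 1, n),     # road curvature
])
# Synthetic rule for illustration: crawling in heavy traffic -> "frustrated".
y = ((X[:, 1] >= 2) & (X[:, 0] < 30)).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```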
Citations: 21
Etna: Harvesting Action Graphs from Websites
Pub Date: 2021-10-10 DOI: 10.1145/3472749.3474752
Oriana Riva, Jason Kace
Knowledge bases, such as Google knowledge graph, contain millions of entities (people, places, etc.) and billions of facts about them. While much is known about entities, little is known about the actions these entities relate to. On the other hand, the Web has lots of information about human tasks. A website for restaurant reservations, for example, implicitly knows about various restaurant-related actions (making reservations, delivering food, etc.), the inputs these actions require and their expected output; it can also be automated to execute those actions. To harvest action knowledge from websites, we propose Etna. Users demonstrate how to accomplish various tasks in a website, and Etna constructs an action-state model of the website visualized as an action graph. An action graph includes definitions of tasks and actions, knowledge about their start/end states, and execution scripts for their automation. We report on our experience in building action-state models of many commercial websites and use cases that leveraged them.
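The action graph is the central data structure here: actions are edges between website states, annotated with required inputs and an executable automation script. Below is a hedged sketch of one possible representation; the names and fields are illustrative, not Etna's schema.

```python
# Sketch: an action graph of website states and executable actions.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str            # e.g., "make_reservation"
    start_state: str     # page/state the action starts from
    end_state: str       # state reached on success
    inputs: list = field(default_factory=list)  # required parameters
    script: str = ""     # handle to a recorded automation script

@dataclass
class ActionGraph:
    states: set = field(default_factory=set)
    actions: list = field(default_factory=list)

    def add(self, action: Action):
        self.states.update({action.start_state, action.end_state})
        self.actions.append(action)

g = ActionGraph()
g.add(Action("search_restaurant", "home", "results", ["cuisine", "city"]))
g.add(Action("make_reservation", "results", "confirmation",
             ["date", "time", "party_size"], script="reserve.js"))
print(sorted(g.states))
```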
Citations: 3