
Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology: Latest Publications

Virtual Muscle Force: Communicating Kinesthetic Forces Through Pseudo-Haptic Feedback and Muscle Input
Michael Rietzler, Gabriel Haas, Thomas Dreja, Florian Geiselhart, E. Rukzio
Natural haptic feedback in virtual reality (VR) is complex and challenging, due to the intricacy of necessary stimuli and respective hardware. Pseudo-haptic feedback aims at providing haptic feedback without providing actual haptic stimuli, but by using other sensory channels (e.g. visual cues) for feedback. We combine such an approach with the additional input modality of muscle activity that is mapped to a virtual force to influence the interaction flow. In comparison to existing approaches, as well as to no kinesthetic feedback at all, the presented solution significantly increased immersion, enjoyment, as well as the perceived quality of kinesthetic feedback.
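The core idea — mapping measured muscle activity to a virtual force that scales how closely the displayed object follows the hand — can be sketched as below. The function name, the EMG feature, and the specific mapping are illustrative assumptions, not the authors' implementation.

```python
def pseudo_haptic_offset(emg_amplitude, virtual_force, gain=0.5):
    """Return a control/display ratio in (0, 1] for a grasped virtual object.

    emg_amplitude: normalized muscle activation in [0, 1] (assumed EMG feature)
    virtual_force: resistance of the virtual object, arbitrary units
    The less the user's exerted virtual force covers the object's resistance,
    the more the object visually lags behind the hand, which is perceived
    as weight or resistance without any physical actuator.
    """
    applied = gain * emg_amplitude          # user's exerted virtual force
    if applied >= virtual_force:
        return 1.0                          # enough force: object follows hand 1:1
    return applied / virtual_force          # otherwise, visually lag behind

# Example: weak activation against a heavy object yields a strong visual lag.
print(round(pseudo_haptic_offset(emg_amplitude=0.2, virtual_force=0.8), 3))  # 0.125
```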
DOI: 10.1145/3332165.3347871 (published 2019-10-17)
Citations: 17
INVANER: INteractive VAscular Network Editing and Repair
Valentin Z. Nigolian, T. Igarashi, Hirofumi Seo
Vascular network reconstruction is an essential aspect of the daily practice of medical doctors working with vascular systems. Accurately representing vascular networks, not only graphically but also in a way that encompasses their structure, can be used to run simulations, plan medical procedures or identify real-life diseases, for example. A vascular network is thus reconstructed from a 3D medical image sequence via segmentation and skeletonization. Many automatic algorithms exist to do so but tend to fail for specific corner cases. On the other hand, manual methods exist as well but are tedious to use and require a lot of time. In this paper, we introduce an interactive vascular network reconstruction system called INVANER that relies on a graph-like representation of the network's structure. A general skeleton is obtained with an automatic method and medical practitioners are allowed to manually repair the local defects where this method fails. Our system uses graph-related tools with local effects and introduces two novel tools, dedicated to solving two common problems arising when automatically extracting the centerlines of vascular structures: so-called "Kissing Vessels" and a type of phenomenon we call "Dotted Vessels."
DOI: 10.1145/3332165.3347900 (published 2019-10-17)
Citations: 1
Session details: Session 2B: Media Authoring
T. Igarashi
DOI: 10.1145/3368372 (published 2019-10-17)
Citations: 0
Session details: Session 5B: Physical Displays
Chris Harrison
DOI: 10.1145/3368378 (published 2019-10-17)
Citations: 0
Tessutivo
Jun Gong, Yu Wu, Lei Yan, T. Seyed, Xing-Dong Yang
We present Tessutivo, a contact-based inductive sensing technique for contextual interactions on interactive fabrics. Our technique recognizes conductive objects (mainly metallic) that are commonly found in households and workplaces, such as keys, coins, and electronic devices. We built a prototype containing six by six spiral-shaped coils made of conductive thread, sewn onto a four-layer fabric structure. We carefully designed the coil shape parameters to maximize the sensitivity based on a new inductance approximation formula. Through a ten-participant study, we evaluated the performance of our proposed sensing technique across 27 common objects, achieving 93.9% real-time accuracy for object recognition. We conclude by presenting several applications to demonstrate the unique interactions enabled by our technique.
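A plausible back end for such a coil grid is to match the measured per-coil inductance map against per-object templates. The nearest-centroid matcher and the template values below are illustrative assumptions; the abstract does not describe the paper's actual classifier or features.

```python
import numpy as np

def recognize(reading, templates):
    """Nearest-centroid matching of a 6x6 inductance map against
    per-object template maps (a stand-in for the paper's classifier)."""
    best, best_dist = None, float("inf")
    for label, template in templates.items():
        dist = np.linalg.norm(reading - template)  # Frobenius distance
        if dist < best_dist:
            best, best_dist = label, dist
    return best

# Hypothetical templates: relative inductance shift per coil for each object.
templates = {
    "key":  np.zeros((6, 6)),
    "coin": np.full((6, 6), 0.5),
}
sample = np.full((6, 6), 0.45)       # noisy reading closest to "coin"
print(recognize(sample, templates))  # coin
```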
DOI: 10.1145/3332165.3347897 (published 2019-10-17)
Citations: 5
Modeling the Uncertainty in 2D Moving Target Selection
Jin Huang, Feng Tian, Nianlong Li, Xiangmin Fan
Understanding the selection uncertainty of moving targets is a fundamental research problem in HCI. However, the few existing works in this domain focus mainly on selecting 1D moving targets with certain input devices, and model generalizability has not been extensively investigated. In this paper, we propose a 2D Ternary-Gaussian model to describe the selection uncertainty manifested in the endpoint distribution for moving target selection. We explore and compare two candidate methods to generalize the problem space from 1D to 2D tasks, and evaluate their performance with three input modalities: mouse, stylus, and finger touch. By applying the proposed model to assist target selection, we achieved up to a 4% improvement in pointing speed and 41% in pointing accuracy compared with two state-of-the-art selection techniques. In addition, when we tested our model's prediction of pointing errors in a realistic user interface, we observed a high fit of R² = 0.94.
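The kind of endpoint model described here can be sketched as a 2D Gaussian whose spread depends on target speed and width. The specific variance formula and coefficients below are assumptions for illustration only, not the paper's fitted Ternary-Gaussian parameterization.

```python
import random

def sample_endpoint(cx, cy, v, w, mu=0.1, alpha=0.02, rng=random):
    """Sample a selection endpoint around a moving target's center (cx, cy).

    Assumed model: endpoint spread grows with target speed v and with
    target width w. The paper's actual parameterization is not given
    in the abstract.
    """
    sigma = (mu * w + alpha * v) ** 0.5  # assumed speed/width-dependent spread
    return rng.gauss(cx, sigma), rng.gauss(cy, sigma)

def hit_probability(v, w, n=20000, seed=7):
    """Monte-Carlo estimate of landing inside a w-wide square target."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = sample_endpoint(0.0, 0.0, v, w, rng=rng)
        if abs(x) <= w / 2 and abs(y) <= w / 2:
            hits += 1
    return hits / n

# Under this assumed model, faster targets are harder to select.
print(hit_probability(v=0.0, w=1.0) > hit_probability(v=50.0, w=1.0))  # True
```

Such a model supports selection assistance directly: given several candidate targets, assign a click to the target with the highest endpoint likelihood rather than to the one under the cursor.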
DOI: 10.1145/3332165.3347880 (published 2019-10-17)
Citations: 28
CAVRN
Sebastian Herscher, Connor DeFanti, N. Vitovitch, Corinne Brenner, Haijun Xia, Kris Layng, Ken Perlin
The virtual reality ecosystem has gained momentum in the gaming, entertainment, and enterprise markets, but is hampered by limitations in concurrent user count, throughput, and accessibility to mass audiences. Based on our analysis of the current state of the virtual reality ecosystem and relevant aspects of traditional media, we propose a set of design hypotheses for practical and effective seated virtual reality experiences of scale. Said hypotheses manifest in the Collective Audience Virtual Reality Nexus (CAVRN), a framework and management system for large-scale (30+ user) virtual reality deployment in a theater-like physical setting. A mixed methodology study of CAVE, an experience implemented using CAVRN, generated rich insights into the proposed hypotheses. We discuss the implications of our findings on content design, audience representation, and audience interaction.
DOI: 10.1145/3332165.3347929 (published 2019-10-17)
Citations: 1
Learning Cooperative Personalized Policies from Gaze Data
Christoph Gebhardt, Brian Hecox, B. V. Opheusden, Daniel J. Wigdor, James M. Hillis, Otmar Hilliges, Hrvoje Benko
An ideal Mixed Reality (MR) system would only present virtual information (e.g., a label) when it is useful to the person. However, deciding when a label is useful is challenging: it depends on a variety of factors, including the current task, previous knowledge, context, etc. In this paper, we propose a Reinforcement Learning (RL) method to learn when to show or hide an object's label given eye movement data. We demonstrate the capabilities of this approach by showing that an intelligent agent can learn cooperative policies that better support users in a visual search task than manually designed heuristics. Furthermore, we show the applicability of our approach to more realistic environments and use cases (e.g., grocery shopping). By posing MR object labeling as a model-free RL problem, we can learn policies implicitly by observing users' behavior without requiring a visual search model or data annotation.
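The show/hide decision can be sketched as a tiny tabular learner over gaze-derived states. This toy reduces the paper's model-free RL formulation to a two-state contextual bandit with an assumed reward; the states, rewards, and simulator are illustrative, whereas the paper learns from real eye-movement data.

```python
import random

ACTIONS = ("show", "hide")

def train(episodes=2000, alpha=0.2, seed=0):
    """Learn Q-values for showing or hiding a label per gaze state."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in ("gazing", "away") for a in ACTIONS}
    for _ in range(episodes):
        state = rng.choice(("gazing", "away"))
        action = rng.choice(ACTIONS)  # pure exploration, for brevity
        # Assumed reward: labels help while the user looks at the object
        # and clutter the view otherwise.
        if state == "gazing":
            reward = 1.0 if action == "show" else -1.0
        else:
            reward = -1.0 if action == "show" else 1.0
        q[(state, action)] += alpha * (reward - q[(state, action)])
    return q

q = train()
print(max(ACTIONS, key=lambda a: q[("gazing", a)]))  # show
print(max(ACTIONS, key=lambda a: q[("away", a)]))    # hide
```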
DOI: 10.1145/3332165.3347933 (published 2019-10-17)
Citations: 36
RFTouchPads
Meng-Ju Hsieh, Jr-Ling Guo, Chin-Yuan Lu, Han-Wei Hsieh, Rong-Hao Liang, Bing-Yu Chen
This paper presents RFTouchPads, a system of batteryless and wireless modular hardware designs of two-dimensional (2D) touch sensor pads based on ultra-high frequency (UHF) radio-frequency identification (RFID) technology. In this system, multiple RFID IC chips are connected to an antenna in parallel. Each chip connects only one of its endpoints to the antenna; hence, the module normally turns off when it gets insufficient energy to operate. When a finger touches the circuit trace attached to another endpoint of the chip, the finger functions as part of the antenna that turns the connected chip on, while the finger touch location is determined according to the chip's ID. Based on this principle, we propose two hardware designs, namely, StickerPad and TilePad. StickerPad is a flexible 3×3 touch-sensing pad suitable for applications on curved surfaces such as the human body. TilePad is a modular 3×3 touch-sensing pad that supports modular area expansion by tiling and provides a more flexible deployment because its antenna is folded. Our implementation allows 2D touch inputs to be reliably detected 2 m away from a remote antenna of an RFID reader. The proposed batteryless, wireless, and modular hardware design enables fine-grained and less-constrained 2D touch inputs in various ubiquitous computing applications.
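Because each chip powers on only while its trace is touched, the decoding step reduces to a lookup from the chip IDs the reader currently sees to pad cells. The 3×3 layout and the "EPC-rc" ID scheme below are illustrative assumptions, not the paper's actual ID format.

```python
# Hypothetical mapping: one RFID chip per pad cell, ID encodes row/column.
CHIP_TO_CELL = {f"EPC-{r}{c}": (r, c) for r in range(3) for c in range(3)}

def touched_cells(visible_ids):
    """Map the set of chip IDs reported by the UHF reader to (row, col)
    pad cells; untouched chips stay unpowered and are never read."""
    return sorted(CHIP_TO_CELL[i] for i in visible_ids if i in CHIP_TO_CELL)

print(touched_cells({"EPC-02", "EPC-21"}))  # [(0, 2), (2, 1)]
```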
DOI: 10.1145/3332165.3347910 (published 2019-10-17)
Citations: 23
Mantis
DOI: 10.1163/2214-8647_dnp_e721650 (published 2019-10-17)
G. Barnaby, A. Roudaut
Citations: 4