
Proceedings of the ACM Symposium on User Interface Software and Technology — Latest Publications

Interacting with live preview frames: in-picture cues for a digital camera interface
Steven R. Gomez
We present a new interaction paradigm for digital cameras aimed at making interactive imaging algorithms accessible on these devices. In our system, the user creates visual cues in front of the lens during the live preview frames that are continuously processed before the snapshot is taken. These cues are recognized by the camera's image processor to control the lens or other settings. We design and analyze vision-based camera interactions, including focus and zoom controls, and argue that the vision-based paradigm offers a new level of photographer control needed for the next generation of digital cameras.
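The cue-recognition loop the abstract describes — continuously processing preview frames and mapping a detected cue to a camera command — can be illustrated in miniature. This toy sketch (not the authors' implementation; the marker-thresholding approach and all names are assumptions) finds a bright in-picture marker and converts its centroid into a normalized focus point:

```python
import numpy as np

def detect_cue(frame, threshold=200):
    """Locate a bright in-picture cue (e.g. a fingertip-mounted marker)
    in a grayscale preview frame; return its (x, y) centroid or None."""
    ys, xs = np.nonzero(frame >= threshold)
    if xs.size == 0:
        return None
    return (float(xs.mean()), float(ys.mean()))

def focus_command(frame):
    """Map the detected cue position to a normalized focus point
    that a lens driver could consume."""
    cue = detect_cue(frame)
    if cue is None:
        return None
    h, w = frame.shape
    return (cue[0] / w, cue[1] / h)

# A synthetic 100x100 preview frame with a bright cue near the top-left.
frame = np.zeros((100, 100), dtype=np.uint8)
frame[20:24, 30:34] = 255
print(focus_command(frame))  # → (0.315, 0.215)
```

In a real pipeline this function would run on every preview frame, so the user could, for instance, point at the subject in front of the lens to set the focus region before the snapshot is taken.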
DOI: 10.1145/1866218.1866250 · pp. 419–420 · Published: 2010-10-03
Citations: 4
MobileSurface: interaction in the air for mobile computing
Ji Zhao, Hujia Liu, Chunhui Zhang, Zhengyou Zhang
We describe a virtual interactive surface technology based on a projector-camera system connected to a mobile device. The system, named MobileSurface, can project images onto any free surface and enables interaction in the air within the projection area. The projector scans a laser beam rapidly across the projection area to produce a stable image at 60 fps. Camera-projector synchronization is used to capture the image of a designated scan line, so the system can project what is perceived as a stable image onto the display surface while simultaneously working as a structured-light 3D scanning system.
DOI: 10.1145/1866218.1866270 · pp. 459–460 · Published: 2010-10-03
Citations: 8
QWIC: performance heuristics for large scale exploratory user interfaces
Daniel A. Smith, Joe Lambert, M. Schraefel, D. Bretherton
Faceted browsers offer an effective way to explore relationships and build new knowledge across data sets. So far, web-based faceted browsers have been hampered by limited feature performance and scale. QWIC, Quick Web Interface Control, describes a set of design heuristics to address performance speed both at the interface and the backend to operate on large-scale sources.
DOI: 10.1145/1866218.1866266 · pp. 451–452 · Published: 2010-10-03
Citations: 0
Supporting self-expression for informal communication
Lisa G. Cowan
Mobile phones are becoming the central tools for communicating and can help us keep in touch with friends and family on-the-go. However, they can also place high demands on attention and constrain interaction. My research concerns how to design communication mechanisms that mitigate these problems to support self-expression for informal communication on mobile phones. I will study how people communicate with camera-phone photos, paper-based sketches, and projected information and how this communication impacts social practices.
DOI: 10.1145/1866218.1866221 · pp. 351–354 · Published: 2010-10-03
Citations: 0
The multiplayer: multi-perspective social video navigation
Zihao Yu, N. Diakopoulos, Mor Naaman
We present a multi-perspective video "multiplayer" designed to organize social video aggregated from online sites like YouTube. Our system automatically time-aligns videos using audio fingerprinting, thus bringing them into a unified temporal frame. The interface utilizes social metadata to visually aid navigation and cue users to more interesting portions of an event. We provide details about the visual and interaction design rationale of the multiplayer.
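The time-alignment step can be illustrated with plain cross-correlation standing in for the paper's audio fingerprinting (a simplification of their method, not a reproduction of it). Given a reference recording and a shorter clip of the same event, the lag that maximizes correlation gives the clip's start offset:

```python
import numpy as np

def estimate_offset(ref, clip):
    """Estimate how many samples into `ref` the shorter `clip` begins,
    by sliding cross-correlation (a crude stand-in for fingerprint matching)."""
    corr = np.correlate(ref, clip, mode="valid")
    return int(np.argmax(corr))

# Two "recordings" of the same event: clip is a delayed excerpt of ref.
rng = np.random.default_rng(0)
ref = rng.standard_normal(1000)
clip = ref[300:500]
print(estimate_offset(ref, clip))  # → 300
```

Once each video's audio track is assigned such an offset relative to a common reference, all clips of the event can be placed on one unified timeline for multi-perspective playback.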
DOI: 10.1145/1866218.1866246 · pp. 413–414 · Published: 2010-10-03
Citations: 5
Intelligent tagging interfaces: beyond folksonomy
Jesse Vig
This paper summarizes our work on using tags to broaden the dialog between a recommender system and its users. We present two tagging applications that enrich this dialog: tagsplanations are tag-based explanations of recommendations provided by a system to its users, and Movie Tuner is a conversational recommender system that enables users to provide feedback on movie recommendations using tags. We discuss the design of both systems and the experimental methodology used to evaluate the design choices.
DOI: 10.1145/1866218.1866226 · pp. 371–374 · Published: 2010-10-03
Citations: 5
Blinkbot: look at, blink and move
Pranav Mistry, Kentaro Ishii, M. Inami, T. Igarashi
In this paper we present BlinkBot, a hands-free input interface for controlling and commanding a robot. BlinkBot explores the natural modalities of gaze and blink to direct a robot to move an object from one location to another. The paper also details the hardware and software implementation of the prototype system.
DOI: 10.1145/1866218.1866238 · pp. 397–398 · Published: 2010-10-03
Citations: 15
Surfboard: keyboard with microphone as a low-cost interactive surface
Jun Kato, Daisuke Sakamoto, T. Igarashi
We introduce a technique that detects simple "surfing" gestures (moving a hand horizontally) on a standard keyboard by analyzing, in real time, sounds recorded by a microphone attached close to the keyboard. This technique allows the user to keep their focus on the screen while surfing on the keyboard. Because it uses a standard keyboard without any modification, the user retains the full input functionality and tactile quality of their favorite keyboard, supplemented with our interface.
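The abstract does not publish the detector itself, but a minimal stand-in for this kind of sound analysis can be sketched: sustained broadband friction noise from a hand sliding over keycaps is separable from silence by tracking short-frame signal energy (the threshold and frame sizes below are illustrative assumptions):

```python
import numpy as np

def frame_energies(signal, frame_len=256):
    """Mean squared amplitude of consecutive non-overlapping frames."""
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)
    return (frames ** 2).mean(axis=1)

def detect_surf(signal, frame_len=256, threshold=0.01, min_frames=4):
    """Flag a 'surfing' gesture when frame energy stays above threshold
    for several consecutive frames (sustained friction noise, not a keypress)."""
    run = best = 0
    for e in frame_energies(signal, frame_len):
        run = run + 1 if e > threshold else 0
        best = max(best, run)
    return best >= min_frames

# Synthetic audio: near-silence vs. sustained broadband noise.
rng = np.random.default_rng(1)
silence = rng.standard_normal(2048) * 0.01
surf = rng.standard_normal(2048) * 0.3
print(detect_surf(silence), detect_surf(surf))  # → False True
```

Requiring several consecutive loud frames is what separates a continuous surfing motion from the short impulsive click of an ordinary keystroke.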
DOI: 10.1145/1866218.1866233 · pp. 387–388 · Published: 2010-10-03
Citations: 13
A support to multi-devices web application
Xavier Le Pallec, Raphaël Marvie, J. Rouillard, Jean-Claude Tarby
Programming an application that uses interactive devices spread across different terminals is not easy, and programming such applications with standard Web technologies (HTTP, JavaScript, a Web browser) is harder still. Yet Web applications have attractive properties: they run on very different terminals, require no specific installation step, and allow the application code to evolve at runtime. Our demonstration presents support for designing multi-device Web applications. After introducing the context of this work, we briefly describe some problems in designing multi-device Web applications. We then present the toolkit we have implemented to help develop applications based on distant interactive devices.
DOI: 10.1145/1866218.1866235 · pp. 391–392 · Published: 2010-10-03
Citations: 5
OnObject: gestural play with tagged everyday objects
Keywon Chung, Michael Shilman, C. Merrill, H. Ishii
Many Tangible User Interface (TUI) systems employ sensor-equipped physical objects. However, they do not easily scale to users' actual environments: most everyday objects lack the necessary hardware, and modification requires hardware and software development by skilled individuals. This limits TUI creation by end users, resulting in inflexible interfaces in which the mapping of sensor input to output events cannot easily be modified to reflect the end user's wishes and circumstances. We introduce OnObject, a small device worn on the hand that can program physical objects to respond to a set of gestural triggers. Users attach RFID tags to situated objects, grab an object by its tag, and program its responses to grab, release, shake, swing, and thrust gestures using a built-in button and a microphone. In this paper, we demonstrate how novice end users, including preschool children, can instantly create engaging gestural object interfaces with sound feedback from toys, drawings, or clay.
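The end-user programming model described above — tag an object, then bind gestures to responses — amounts to a per-tag mapping table. A minimal sketch follows (the class and method names are hypothetical, not OnObject's actual API); the gesture vocabulary is the grab/release/shake/swing/thrust set from the abstract:

```python
# Recognized gesture triggers, per the paper's trigger set.
GESTURES = {"grab", "release", "shake", "swing", "thrust"}

class TaggedObject:
    """An everyday object identified by its RFID tag, with
    end-user-programmed gesture -> response bindings."""

    def __init__(self, tag_id):
        self.tag_id = tag_id
        self.responses = {}

    def program(self, gesture, response):
        """Bind a response (e.g. a recorded sound) to a gesture trigger."""
        if gesture not in GESTURES:
            raise ValueError(f"unknown gesture: {gesture}")
        self.responses[gesture] = response

    def on_gesture(self, gesture):
        """Return the programmed response, or '' if none is bound."""
        return self.responses.get(gesture, "")

# An end user tags a toy and binds a sound to the shake gesture.
toy = TaggedObject("tag-001")
toy.program("shake", "play rattle.wav")
print(toy.on_gesture("shake"))  # → play rattle.wav
```

Because the binding lives in a plain table rather than in device firmware, a novice user can rebind responses at any time — which is the flexibility the abstract argues conventional sensor-equipped TUI objects lack.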
DOI: 10.1145/1866218.1866229 · pp. 379–380 · Published: 2010-10-03
Citations: 25