
Latest publications from the Journal on Multimodal User Interfaces

Multimodal exploration in elementary music classroom
CAS Tier 3, Q2 Computer Science · Pub Date: 2023-10-18 · DOI: 10.1007/s12193-023-00420-x
Martha Papadogianni, Ercan Altinsoy, Areti Andreopoulou
Citations: 0
Hearing loss prevention at loud music events via real-time visuo-haptic feedback
CAS Tier 3, Q2 Computer Science · Pub Date: 2023-10-13 · DOI: 10.1007/s12193-023-00419-4
Luca Turchet, Simone Luiten, Tjebbe Treub, Marloes van der Burgt, Costanza Siani, Alberto Boem
Abstract: Hearing loss is becoming a global problem, partly as a consequence of exposure to loud music. People may be unaware of the harmful sound levels at venues such as discotheques or festivals and of the damage loud music can cause. Earplugs are effective in reducing the risk of noise-induced hearing loss but have been shown to be an insufficient prevention strategy on their own. Thus, when it is not possible to lower the volume of the sound source, a viable solution is to relocate to quieter locations from time to time. In this context, this study introduces a bracelet designed to warn users, via haptic, visual, or visuo-haptic feedback, when the music is too loud at their specific location. The bracelet embeds a microphone, a microcontroller, an LED strip, and four vibration motors. We performed a user study in which thirteen participants were asked to react to the three kinds of feedback during a simulated disco club event where the volume of the music varied, reaching loud intensities. Results showed that participants never missed the above-threshold notification with any type of feedback, but visual feedback led to the slowest reaction times and was deemed the least effective. In line with findings reported in the hearing loss prevention literature, the perceived usefulness of the proposed device depended strongly on participants' subjective attitudes toward hearing risks at loud music events and on their willingness to take preventive action. Ultimately, our study shows how technology, no matter how effective, may not be able to overcome these kinds of cultural issues in hearing loss prevention. Educational strategies may be a more effective answer to the real problem: changing people's attitudes and motivating them to protect their hearing.
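The above-threshold check at the heart of such a device can be sketched in a few lines. This is a minimal illustration, not the authors' firmware: the calibration offset (full-scale digital RMS mapped to 120 dB SPL) and the 100 dB warning threshold are assumptions chosen for the example.

```python
import math

def spl_db(samples, calib_db=120.0):
    """Estimate sound pressure level from normalized microphone samples.

    calib_db is an assumed calibration constant: the SPL that a
    full-scale (RMS = 1.0) signal corresponds to on this microphone.
    """
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    if rms == 0.0:
        return float("-inf")
    return calib_db + 20.0 * math.log10(rms)

def should_warn(samples, threshold_db=100.0):
    """True when the estimated level exceeds the warning threshold."""
    return spl_db(samples) > threshold_db
```

On the actual device, a check like `should_warn` would drive the LED strip and the four vibration motors rather than return a boolean.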
Citations: 0
A social robot as your reading companion: exploring the relationships between gaze patterns and knowledge gains
CAS Tier 3, Q2 Computer Science · Pub Date: 2023-10-12 · DOI: 10.1007/s12193-023-00418-5
Xuan Liu, Jiachen Ma, Qiang Wang
Citations: 0
In-vehicle air gesture design: impacts of display modality and control orientation
CAS Tier 3, Q2 Computer Science · Pub Date: 2023-09-14 · DOI: 10.1007/s12193-023-00415-8
Jason Sterkenburg, Steven Landry, Shabnam FakhrHosseini, Myounghoon Jeon
Citations: 1
Pegasos: a framework for the creation of direct mobile coaching feedback systems
CAS Tier 3, Q2 Computer Science · Pub Date: 2023-09-12 · DOI: 10.1007/s12193-023-00411-y
Martin Dobiasch, Stefan Oppl, Michael Stöckl, Arnold Baca
Abstract: Feedback is essential for athletes to improve their sport performance. Feedback systems aim to provide athletes and coaches not only with visualisations of the acquired data but also with insights into possibly invisible aspects of their performance. With the widespread adoption of smartphones and the growth of their capabilities, their use as a platform for feedback systems is becoming increasingly popular. However, developing mobile feedback systems demands a high level of expertise from researchers and practitioners. The Direct Mobile Coaching model is a design paradigm for mobile feedback systems. To reduce programming effort, we introduce Pegasos, a framework for creating feedback systems that implement the Direct Mobile Coaching model. The paper compares this framework with the state of the art with regard to its ability to provide different feedback variants and to offer multimodality to users.
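Pegasos's actual API is not shown in the abstract; the sketch below only illustrates the general idea of a feedback system that routes one coaching message through multiple registered modalities. All class and method names here are hypothetical.

```python
from abc import ABC, abstractmethod

class FeedbackChannel(ABC):
    """One modality through which a coaching message can be delivered."""

    @abstractmethod
    def deliver(self, message: str) -> str: ...

class AudioChannel(FeedbackChannel):
    def deliver(self, message: str) -> str:
        return f"[audio] {message}"   # e.g. text-to-speech on the phone

class VisualChannel(FeedbackChannel):
    def deliver(self, message: str) -> str:
        return f"[visual] {message}"  # e.g. an on-screen overlay

class FeedbackSystem:
    """Routes each coaching message to every registered modality."""

    def __init__(self) -> None:
        self.channels: list[FeedbackChannel] = []

    def register(self, channel: FeedbackChannel) -> None:
        self.channels.append(channel)

    def give_feedback(self, message: str) -> list[str]:
        return [c.deliver(message) for c in self.channels]
```

A framework built around such an interface lets researchers add a new feedback variant by implementing one class, which is the kind of effort reduction the abstract describes.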
Citations: 1
PepperOSC: enabling interactive sonification of a robot's expressive movement
CAS Tier 3, Q2 Computer Science · Pub Date: 2023-09-09 · DOI: 10.1007/s12193-023-00414-9
Adrian B. Latupeirissa, Roberto Bresin
Abstract: This paper presents the design and development of PepperOSC, an interface that connects Pepper and NAO robots with sound production tools to enable interactive sonification in human-robot interaction (HRI). The interface uses Open Sound Control (OSC) messages to stream kinematic data from the robots to various sound design and music production tools. The goals of PepperOSC are twofold: (i) to provide HRI researchers with a tool for developing multimodal user interfaces through sonification, and (ii) to lower the barrier for sound designers to contribute to HRI. To demonstrate the potential uses of PepperOSC, the paper also presents two applications: (i) a course project in which two master's students created a robot sound model in Pure Data, and (ii) a museum installation of a Pepper robot employing sound models developed in MaxMSP and SuperCollider by a sound designer and by a composer/researcher in music technology, respectively. Furthermore, we discuss potential use cases of PepperOSC in social robotics and artistic contexts. These applications demonstrate the versatility of PepperOSC and its ability to support diverse aesthetic strategies for robot movement sonification, offering a promising approach to enhancing the effectiveness and appeal of human-robot interactions.
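To make the OSC streaming concrete, here is a minimal OSC 1.0 message encoder in stdlib Python. The address pattern `/pepper/HeadYaw` is a hypothetical example, not necessarily PepperOSC's actual namespace; in practice one would typically use a library such as python-osc and send the packet over UDP to the sound tool (SuperCollider listens on port 57120 by default).

```python
import struct

def _pad(data: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    data += b"\x00"
    return data + b"\x00" * (-len(data) % 4)

def osc_message(address: str, *values: float) -> bytes:
    """Encode an OSC 1.0 message carrying float32 arguments."""
    msg = _pad(address.encode("ascii"))                      # address pattern
    msg += _pad(("," + "f" * len(values)).encode("ascii"))   # type tag string
    for v in values:                                         # big-endian float32 args
        msg += struct.pack(">f", v)
    return msg

# One joint-angle sample, as it might be streamed every frame:
packet = osc_message("/pepper/HeadYaw", 0.3)
```

The resulting bytes can then be sent with, for example, `socket.sendto(packet, ("127.0.0.1", 57120))`.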
Citations: 1
Perceptually congruent sonification of auditory line charts
IF 2.9 · CAS Tier 3, Q2 Computer Science · Pub Date: 2023-08-30 · DOI: 10.1007/s12193-023-00413-w
J. Fitzpatrick, Flaithrí Neff
Citations: 1
Correction to: Understanding virtual drilling perception using sound, and kinesthetic cues obtained with a mouse and keyboard
IF 2.9 · CAS Tier 3, Q2 Computer Science · Pub Date: 2023-08-28 · DOI: 10.1007/s12193-023-00412-x
Guoxuan Ning, Brianna Grant, B. Kapralos, A. Quevedo, Kc Collins, K. Kanev, A. Dubrowski
Citations: 0
Research on the application of gaze visualization interface on virtual reality training systems
IF 2.9 · CAS Tier 3, Q2 Computer Science · Pub Date: 2023-08-18 · DOI: 10.1007/s12193-023-00409-6
Haram Choi, Joungheum Kwon, Sanghun Nam
Citations: 0
Facial expression recognition via transfer learning in cooperative game paradigms for enhanced social AI
IF 2.9 · CAS Tier 3, Q2 Computer Science · Pub Date: 2023-08-14 · DOI: 10.1007/s12193-023-00410-z
Paula Castro Sánchez, Casey C. Bennett
Citations: 0