
International Journal of Multimedia Data Engineering and Management: Latest Publications

A-DisETrac Advanced Analytic Dashboard for Distributed Eye Tracking
Pub Date: 2024-04-02 DOI: 10.4018/IJMDEM.341792
Yasasi Abeysinghe, Bhanuka Mahanama, Gavindya Jayawardena, Yasith Jayawardana, Mohan Sunkara, Andrew T. Duchowski, Vikas Ashok, S. Jayarathna
Understanding how individuals focus and perform visual searches during collaborative tasks can help improve user engagement. Eye tracking measures provide informative cues for such understanding. This article presents A-DisETrac, an advanced analytic dashboard for distributed eye tracking. It uses off-the-shelf eye trackers to monitor multiple users in parallel, compute both traditional and advanced gaze measures in real time, and display them on an interactive dashboard. Using two pilot studies, the system was evaluated in terms of user experience and utility and compared with existing work. Moreover, the system was used to study how advanced gaze measures, such as the ambient-focal coefficient K and the real-time index of pupillary activity, relate to collaborative behavior. It was observed that the time a group takes to complete a puzzle is related to its quantified ambient visual scanning behavior: groups that spent more time showed more scanning behavior. User experience questionnaire results suggest that the dashboard provides a comparatively good user experience.
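The ambient-focal coefficient K mentioned in the abstract has a standard published definition (after Krejtz et al.): fixation durations and the amplitudes of the saccades that follow them are z-score standardized over the recording, and each K_i is the difference of the two z-scores. The sketch below illustrates only that generic computation; it is not code from A-DisETrac, and the function and variable names are illustrative.

```python
import numpy as np

def coefficient_k(fix_durations, saccade_amplitudes):
    """Per-fixation coefficient K_i (after Krejtz et al., 2016).

    Durations and amplitudes are z-scored over the whole recording;
    K_i pairs fixation i with the amplitude of the saccade that
    follows it, so len(result) == n_fixations - 1.  K_i > 0 suggests
    focal viewing (long fixation, short saccade); K_i < 0 suggests
    ambient scanning.  In practice K is examined as a mean over
    sliding time windows rather than a single number.
    """
    d = np.asarray(fix_durations, dtype=float)
    a = np.asarray(saccade_amplitudes, dtype=float)
    z_d = (d - d.mean()) / d.std()
    z_a = (a - a.mean()) / a.std()
    return z_d[:-1] - z_a  # saccade i follows fixation i

# 5 fixations (ms) and the 4 saccades between them (degrees).
k_series = coefficient_k([180, 450, 160, 520, 170], [8.0, 1.5, 9.2, 1.1])
print(k_series)          # negative entries mark ambient scanning moments
print(k_series.mean())   # overall tendency over this window
```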
Citations: 0
Comparison of Light Field and Conventional Near-Eye AR Displays in Virtual-Real Integration Efficiency
Pub Date: 2023-11-09 DOI: 10.4018/ijmdem.333609
Wei-An Teng, Su-Ling Yeh, Homer H. Chen
Most existing wearable displays for augmented reality (AR) have only one fixed focal plane and hence can easily suffer from vergence-accommodation conflict (VAC). In contrast, light field displays allow users to focus at any depth, free of VAC. This paper presents a series of text-based visual search tasks to systematically and quantitatively compare a near-eye light field AR display with a conventional AR display, specifically with regard to how participants wearing such displays perform on a virtual-real integration task. Task performance is evaluated by task completion rate and accuracy. The results show that the light field AR glasses lead to significantly higher user performance than the conventional AR glasses. In addition, 80% of the participants prefer the light field AR glasses over the conventional AR glasses for visual comfort.
Citations: 0
Automation of Explainability Auditing for Image Recognition
Pub Date: 2023-11-01 DOI: 10.4018/ijmdem.332882
Duleep Rathgamage Don, Jonathan Boardman, Sudhashree Sayenju, Ramazan Aygun, Yifan Zhang, Bill Franks, Sereres Johnston, George Lee, Dan Sullivan, Girish Modgil
XAI requires artificial intelligence systems to provide explanations for their decisions and actions for review. Nevertheless, for big data systems where decisions are made frequently, it is technically impossible to have an expert monitor every decision. To solve this problem, the authors propose an explainability auditing method for image recognition that assesses whether the explanations are relevant to the decision made by a black-box model, involving an expert as needed when explanations are doubtful. The explainability auditing system classifies explanations as weak or satisfactory using a local explainability model, by analyzing the image segments that impacted the decision. This version of the proposed method uses LIME to generate the local explanations as superpixels. A bag of image patches is then extracted from the superpixels to determine their texture and evaluate the local explanations. Using a rooftop image dataset, the authors show that 95.7% of the cases to be audited can be detected by the proposed method.
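For readers unfamiliar with the pipeline the abstract describes, the sketch below shows how LIME's documented image API produces superpixel explanations that an auditing step can then score. Only the LIME calls follow a real API; `predict_fn`, `texture_score`, and the 0.5 threshold are hypothetical stand-ins for the black-box model and the paper's bag-of-patches texture analysis.

```python
import numpy as np
from lime import lime_image  # pip install lime

def audit_explanation(image, predict_fn, texture_score, threshold=0.5):
    """Flag a LIME explanation as weak or satisfactory (illustrative only).

    image: HxWx3 array; predict_fn: batch of images -> class
    probabilities (the black-box model under audit); texture_score
    and threshold are hypothetical stand-ins for the paper's
    bag-of-image-patches texture evaluation.
    """
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image, predict_fn, top_labels=1, num_samples=1000)
    label = explanation.top_labels[0]
    # Mask of the superpixels that most influenced the predicted label.
    _, mask = explanation.get_image_and_mask(
        label, positive_only=True, num_features=5, hide_rest=False)
    influential_pixels = image[mask.astype(bool)]
    score = texture_score(influential_pixels)
    return "satisfactory" if score >= threshold else "weak"
```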
Citations: 0
Adaptive Acquisition and Visualization of Point Cloud Using Airborne LIDAR and Game Engine
Pub Date: 2023-10-27 DOI: 10.4018/ijmdem.332881
Chengxuan Huang, Evan Brock, Dalei Wu, Yu Liang
The development of digital twins for smart city applications requires real-time monitoring and mapping of urban environments. This work develops a framework for real-time urban mapping using an airborne light detection and ranging (LIDAR) agent and a game engine. To improve the accuracy and efficiency of data acquisition and utilization, the framework focuses on the following aspects: (1) an optimal navigation strategy using Deep Q-Network (DQN) reinforcement learning; (2) multi-streamed game engines employed to visualize data of the urban environment and to train the deep-learning-enabled data acquisition platform; (3) a dynamic mesh used to formulate and analyze the captured point cloud; and (4) a quantitative error analysis for points generated with the experimental aerial mapping platform, together with an accuracy analysis of post-processing. Experimental results show that the proposed DQN-enabled navigation strategy, rendering algorithm, and post-processing enable a game engine to efficiently generate a highly accurate digital twin of an urban environment.
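As a point of reference for aspect (1), here is a minimal PyTorch sketch of the generic Deep Q-Network temporal-difference update. The state encoding, action set, and reward shaping of the paper's LIDAR agent are not specified in the abstract, so the dimensions and names below are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Maps an agent state (e.g., pose + coverage features) to Q-values."""
    def __init__(self, state_dim=16, n_actions=6):  # assumed sizes
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions))

    def forward(self, s):
        return self.net(s)

def dqn_step(q, q_target, optimizer, batch, gamma=0.99):
    """One temporal-difference update on a replay batch.

    batch: (s, a, r, s_next, done) tensors sampled from a replay
    buffer; a is int64 action indices, done is a 0/1 float mask.
    """
    s, a, r, s_next, done = batch
    q_sa = q(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * (1 - done) * q_target(s_next).max(dim=1).values
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```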
Citations: 0
iASSIST
Pub Date: 2020-10-01 DOI: 10.4018/ijmdem.2020100103
Zhigang Zhu, Jin Chen, Lei Zhang, Yaohua Chang, Tyler Franklin, Hao Tang, Arber Ruci
iASSIST is an iPhone-based assistive sensor solution for independent and safe travel for people who are blind or visually impaired, or those who simply face challenges in navigating an unfamiliar indoor environment. The solution integrates information from Bluetooth beacons, data connectivity, visual models, and user preferences. Hybrid models of interiors are created in a modeling stage from these multimodal data, which are collected and mapped to the floor plan as the modeler walks through the building. A client-server architecture allows scaling to large areas by lazy-loading models according to beacon signals and/or adjacent-region proximity. During the navigation stage, a user with the navigation app is localized within the floor plan, using visual, connectivity, and user preference data, along an optimal route to the destination. User interfaces for both modeling and navigation use multimedia channels, including visual, audio, and haptic feedback for targeted users. The design of human subject test experiments is also described, along with some preliminary experimental results.
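The lazy-loading behavior described above, fetching a region's hybrid model only when its beacons come into range and prefetching adjacent regions, might be organized as in the sketch below. All names (`RegionModelCache`, `fetch_model`, the dictionaries) are hypothetical; this is not the iASSIST codebase or API.

```python
class RegionModelCache:
    """Lazily load per-region interior models keyed by beacon IDs.

    fetch_model stands in for the client-server call that downloads
    one region's hybrid model (visual features + floor-plan mapping);
    every name here is illustrative, not the actual iASSIST API.
    """
    def __init__(self, beacon_to_region, adjacency, fetch_model):
        self.beacon_to_region = beacon_to_region  # beacon ID -> region ID
        self.adjacency = adjacency                # region ID -> neighbor IDs
        self.fetch_model = fetch_model            # region ID -> model object
        self.loaded = {}                          # region ID -> cached model

    def on_beacons_detected(self, beacon_ids):
        """Load models for detected regions and their neighbors, once each."""
        regions = {self.beacon_to_region[b]
                   for b in beacon_ids if b in self.beacon_to_region}
        neighbors = {n for r in regions for n in self.adjacency.get(r, ())}
        for region in regions | neighbors:
            if region not in self.loaded:
                self.loaded[region] = self.fetch_model(region)
        return [self.loaded[r] for r in regions]
```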
Citations: 4
FaceTimeMap
Pub Date: 2019-04-01 DOI: 10.4018/ijmdem.2019040103
Buddha Shrestha, Haeyong Chung, R. S. Aygün
In this article, the authors study bitmap indexing for temporal querying of faces that appear in videos. Since the bitmap index is originally designed to select a set of records that satisfy a value in the domain of the attribute, there is no clear strategy for how to apply it for temporal querying. Accordingly, the authors introduce a multi-level bitmap index that they call "FaceTimeMap" for temporal querying of faces in videos. The first level of the FaceTimeMap index is used for determining whether a person appears in a video or not, whereas the second level of the index is used for determining intervals when a person appears. First, the authors analyze the co-appearance query, where two or more people appear simultaneously in a video, and then examine the next-appearance query, where a person appears right after another person. In addition, to consider the gap between the appearances of people, the authors study eventual- and prior-appearance queries. Queries are satisfied by applying bitwise operations on the FaceTimeMap index. The authors provide some performance studies associated with this index. KEYWORDS: Allen's Intervals, Co-Appearance, Eventual-Appearance, Face Search, Next-Appearance
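A toy rendering of the two-level idea, under a fixed-time-slot simplification that is an assumption of this sketch (the paper's index and its Allen-interval query handling are richer): level one records whether a person appears in a video at all, level two keeps one bit per time slot, and a co-appearance query reduces to a single bitwise AND.

```python
class FaceTimeMapSketch:
    """Simplified two-level bitmap index over fixed time slots."""
    def __init__(self, n_slots):
        self.n_slots = n_slots
        self.level1 = {}  # person -> set of videos the person appears in
        self.level2 = {}  # (person, video) -> int used as a slot bitstring

    def add_appearance(self, person, video, slot):
        self.level1.setdefault(person, set()).add(video)
        key = (person, video)
        self.level2[key] = self.level2.get(key, 0) | (1 << slot)

    def co_appearance(self, p1, p2, video):
        """Slots where both people are on screen: a bitwise AND of bitmaps."""
        bits = (self.level2.get((p1, video), 0) &
                self.level2.get((p2, video), 0))
        return [s for s in range(self.n_slots) if bits >> s & 1]

idx = FaceTimeMapSketch(n_slots=8)
idx.add_appearance("alice", "v1", 2); idx.add_appearance("alice", "v1", 3)
idx.add_appearance("bob", "v1", 3);   idx.add_appearance("bob", "v1", 4)
print(idx.co_appearance("alice", "bob", "v1"))  # -> [3]
```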
Citations: 2