
Proceedings of the 2nd International Workshop on Interactive Multimedia Retrieval: Latest Publications

An Asynchronous Scheme for the Distributed Evaluation of Interactive Multimedia Retrieval
Loris Sauter, Ralph Gasser, A. Bernstein, H. Schuldt, Luca Rossetto
Evaluation campaigns for interactive multimedia retrieval, such as the Video Browser Showdown (VBS) or the Lifelog Search Challenge (LSC), have so far imposed constraints on both the simultaneity and locality of all participants, requiring them to solve the same tasks in the same place, at the same time, and under the same conditions. These constraints are in contrast to other evaluation campaigns that do not focus on interactivity, where participants can process the tasks in any place at any time. The recent travel restrictions necessitated the relaxation of the locality constraint of interactive campaigns, enabling participants to take part from an arbitrary location. Born out of necessity, this relaxation turned out to be a boon, since it greatly simplified the evaluation process and enabled the organisation of ad-hoc evaluations outside of the large campaigns. However, it also introduced an additional complication in cases where participants were spread over several time zones. In this paper, we introduce an evaluation scheme for interactive retrieval evaluation that relaxes both the simultaneity and locality constraints, enabling participation from any place at any time within a predefined time frame. This scheme, as implemented in the Distributed Retrieval Evaluation Server (DRES), enables novel ways of conducting interactive retrieval evaluation and bridges the gap between interactive campaigns and non-interactive ones.
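The core of the scheme can be illustrated with a small sketch. The Python fragment below is not the actual DRES implementation; it is a minimal, hypothetical model of the asynchronous idea: each participant may start a task at any point inside a predefined evaluation window, and submissions are judged against that participant's own task window rather than a shared wall-clock deadline. All class and method names are illustrative.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class AsyncTaskRun:
    """Hypothetical model of one asynchronous task (names are illustrative)."""
    evaluation_opens: datetime    # predefined time frame: start
    evaluation_closes: datetime   # predefined time frame: end
    task_duration: timedelta      # per-participant solving time
    starts: dict = field(default_factory=dict)  # participant -> task start

    def start_task(self, participant, now):
        """Each participant may start the task once, anywhere in the window."""
        if participant in self.starts:
            return False
        if not (self.evaluation_opens <= now <= self.evaluation_closes):
            return False
        self.starts[participant] = now
        return True

    def accept_submission(self, participant, now):
        """A submission counts only inside the participant's own task window."""
        start = self.starts.get(participant)
        if start is None:
            return False
        deadline = min(start + self.task_duration, self.evaluation_closes)
        return now <= deadline

# A team in another time zone can start hours later yet still gets the
# same five-minute task window as everyone else.
run = AsyncTaskRun(
    evaluation_opens=datetime(2022, 10, 14, 0, 0),
    evaluation_closes=datetime(2022, 10, 16, 0, 0),
    task_duration=timedelta(minutes=5),
)
run.start_task("team_a", datetime(2022, 10, 14, 9, 0))
assert run.accept_submission("team_a", datetime(2022, 10, 14, 9, 4))
assert not run.accept_submission("team_a", datetime(2022, 10, 14, 9, 6))
```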
DOI: 10.1145/3552467.3554797 · Published 2022-10-14
Citations: 2
Influence of Late Fusion of High-Level Features on User Relevance Feedback for Videos
O. Khan, Jan Zahálka, Björn þór Jónsson
Content-based media retrieval relies on multimodal data representations. For videos, these representations mainly cover the textual, visual, and audio modalities. While the modality representations can be used individually, combining their information can improve the overall retrieval experience. For video collections, retrieval focuses on either finding a full-length video or specific segment(s) from one or more videos. For the former, the textual metadata along with broad descriptions of the contents are useful. For the latter, visual and audio modality representations are preferable, as they represent the contents of specific segments in videos. Interactive learning approaches, such as user relevance feedback, have shown promising results when solving exploration and search tasks in larger collections. When combining modality representations in user relevance feedback, a form of late modality fusion is often applied. While this generally tends to improve retrieval, its performance for video collections with multiple modality representations of high-level features is not well known. In this study we analyse the effects of late fusion using high-level features, such as semantic concepts, actions, scenes, and audio. From our experiments on three video datasets, V3C1, Charades, and VGG-Sound, we show that fusion works well, but that depending on the task or dataset, excluding one or more modalities can improve results. When it is clear that a modality is better suited for a task, setting a preference to enhance that modality's influence in the fusion process can also be greatly beneficial. Furthermore, we show that mixing fusion results with results from individual modalities can be better than performing fusion alone.
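To make the fusion step concrete, the sketch below shows one common form of weighted late fusion under simple assumptions: each modality (e.g. semantic concepts, actions, scenes, audio) has already produced a relevance score per item, and the fused score is a weighted average. The paper's exact fusion variants may differ; all names here are illustrative.

```python
import numpy as np

def late_fusion(scores, weights=None):
    """Weighted average of per-modality score vectors (late fusion).

    scores:  dict mapping modality name -> np.ndarray of item scores
    weights: optional dict mapping modality name -> fusion weight
    """
    weights = weights or {m: 1.0 for m in scores}
    total = sum(weights[m] for m in scores)
    return sum(weights[m] * s for m, s in scores.items()) / total

# Example: three modalities scoring the same three video segments; the
# 'audio' modality is boosted because the (hypothetical) task is audio-driven.
scores = {
    "concepts": np.array([0.9, 0.2, 0.4]),
    "actions":  np.array([0.1, 0.8, 0.3]),
    "audio":    np.array([0.2, 0.9, 0.1]),
}
fused = late_fusion(scores, weights={"concepts": 1.0, "actions": 1.0, "audio": 2.0})
ranking = np.argsort(-fused)  # indices of segments, best match first
# Excluding a modality entirely (which the paper finds can help) simply
# means omitting its entry from the scores dict.
```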
DOI: 10.1145/3552467.3554795 · Published 2022-10-14
Citations: 1
NeoCube
Nikolaj Mertz, Björn þór Jónsson, Aaron Duane
In this work, we consider metadata-based exploration of media collections using the M3 data model, to support multimedia analytics applications. We propose a new metadata-server implementation based on the Neo4j graph database system and compare it to the existing, heavily-optimised server based on a relational database system. We show that the graph-based implementation performs well for interactive metadata-space retrieval, albeit not as well as the optimised relational implementation. However, the graph-based implementation also allows for very efficient updates to the metadata collection, which are practically impossible in the optimised relational implementation.
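As a rough illustration of what such a graph-based metadata server involves, the sketch below uses the official neo4j Python driver (5.x) with a hypothetical schema in which media objects are (:Object) nodes linked via [:TAGGED] relationships to (:Tag) nodes; the actual M3 schema used by NeoCube may differ. It also hints at why updates are cheap: inserting an object and its tags is a small, local graph write.

```python
from neo4j import GraphDatabase  # pip install neo4j (driver 5.x API)

# Hypothetical schema: (:Object)-[:TAGGED]->(:Tag); the actual M3 schema
# used by NeoCube may differ.

def add_object(tx, obj_id, tags):
    """Insert a media object and its tags: a small, local graph write."""
    tx.run(
        "MERGE (o:Object {id: $id}) "
        "WITH o UNWIND $tags AS name "
        "MERGE (t:Tag {name: name}) "
        "MERGE (o)-[:TAGGED]->(t)",
        id=obj_id, tags=tags,
    )

def objects_with_all_tags(tx, tags):
    """Return ids of objects tagged with every tag in the list."""
    result = tx.run(
        "MATCH (o:Object)-[:TAGGED]->(t:Tag) WHERE t.name IN $tags "
        "WITH o, count(DISTINCT t) AS hits WHERE hits = size($tags) "
        "RETURN o.id AS id",
        tags=tags,
    )
    return [record["id"] for record in result]

with GraphDatabase.driver("bolt://localhost:7687",
                          auth=("neo4j", "password")) as driver:
    with driver.session() as session:
        session.execute_write(add_object, "img_42", ["beach", "sunset"])
        ids = session.execute_read(objects_with_all_tags, ["beach", "sunset"])
```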
DOI: 10.1145/3552467.3554799 · Published 2022-10-14
Citations: 1
VILT
Sophie Fischer, Carlos Gemmell, Iain Mackie, Jeffrey Dalton
This work addresses challenges in developing conversational assistants that support rich multimodal video interactions to accomplish real-world tasks interactively. We introduce the task of automatically linking instructional videos to task steps as "Video Instructions Linking for Complex Tasks" (VILT). Specifically, we focus on the domain of cooking and empowering users to cook meals interactively with a video-enabled Alexa skill. We create a reusable benchmark with 61 queries from recipe tasks and curate a collection of 2,133 instructional "How-To" cooking videos. Studying VILT with state-of-the-art retrieval methods, we find that dense retrieval with ANCE is the most effective, achieving an NDCG@3 of 0.566 and P@1 of 0.644. We also conduct a user study that measures the effect of incorporating videos in a real-world task setting, where 10 participants perform several cooking tasks with varying multimodal experimental conditions using a state-of-the-art Alexa TaskBot system. The users interacting with manually linked videos said they learned something new 64% of the time, which is a 9% increase compared to the automatically linked videos (55%), indicating that linked video relevance is important for task learning.
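The paper reports effectiveness as NDCG@3 and P@1. For reference, the sketch below shows how these two metrics are commonly computed from graded relevance judgments of a ranked result list; the exact gain and discount convention used in the paper is an assumption here (linear gain, log2 discount).

```python
import math

def ndcg_at_k(relevances, k=3):
    """NDCG@k with linear gain and log2 discount.

    relevances: graded relevance of results, in ranked order.
    """
    def dcg(rels):
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

def precision_at_1(relevances):
    """1.0 if the top-ranked result is relevant at all, else 0.0."""
    return 1.0 if relevances and relevances[0] > 0 else 0.0

# Example: judged relevance (0-2) of the top five videos for one query.
ranked = [2, 0, 1, 0, 0]
print(ndcg_at_k(ranked))       # ~0.95
print(precision_at_1(ranked))  # 1.0
```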
DOI: 10.1145/3552467.3554794 · Published 2022-10-14
Citations: 1
Combining Semantic and Visual Image Graphs for Efficient Search and Exploration of Large Dynamic Image Collections
K. U. Barthel, N. Hezel, Konstantin Schall, K. Jung
Image collections today often consist of millions of images, making it impossible to get an overview of the entire content. In recent years, we have presented several demonstrators of graph-based systems allowing image search and a visual exploration of the collection. Meanwhile, very powerful visual and joint visual-textual feature vectors have been developed, which are suitable for finding images similar to a query image or matching a textual description. A drawback of these image feature vectors is their high number of dimensions, which leads to long search times, especially for large image collections. In this paper, we show how the search time can be significantly reduced even for high-dimensional feature vectors, improving the efficiency of the search system. On the one hand, by combining two different image graphs, an extremely fast approximate nearest-neighbor search can be achieved; experimental results show that the proposed method performs better than state-of-the-art methods. On the other hand, the entire image collection can be explored visually in real time using a standard web browser. Unlike other graph-based search systems, the proposed image graphs can dynamically adapt to the insertion and removal of images from the collection.
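The basic building block of graph-based approximate nearest-neighbor search is a greedy traversal of a precomputed neighbor graph: start at an entry node and repeatedly move to whichever neighbor is closer to the query. The sketch below shows only this single-graph traversal with illustrative names; the paper's contributions of combining a semantic and a visual graph and of supporting dynamic insertions and removals are not reproduced here.

```python
import numpy as np

def greedy_ann(query, vectors, neighbors, entry=0):
    """Greedy best-first walk on a neighbor graph.

    vectors:   (n, d) feature matrix
    neighbors: dict mapping node id -> list of adjacent node ids
    """
    best = entry
    best_dist = np.linalg.norm(vectors[best] - query)
    improved = True
    while improved:
        improved = False
        for nb in neighbors[best]:
            d = np.linalg.norm(vectors[nb] - query)
            if d < best_dist:
                best, best_dist, improved = nb, d, True
    return best, best_dist

# Tiny demo: a brute-force 10-NN graph over random vectors (real systems
# build and update the neighbor graph incrementally instead).
rng = np.random.default_rng(0)
vecs = rng.normal(size=(200, 16)).astype(np.float32)
d2 = ((vecs[:, None, :] - vecs[None, :, :]) ** 2).sum(-1)
knn = {i: list(np.argsort(d2[i])[1:11]) for i in range(len(vecs))}
node, dist = greedy_ann(rng.normal(size=16), vecs, knn)
```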
DOI: 10.1145/3552467.3554796 · Published 2022-10-14
Citations: 3
Impact of Blind Image Quality Assessment on the Retrieval of Lifelog Images
Ricardo F. Ribeiro, A. Trifan, António J. R. Neves
The use of personal lifelogs can be beneficial for improving our quality of life, as they can serve as tools for memory augmentation or for providing support to people with memory issues. In visual lifelogs, data are captured by cameras in the form of images or videos. However, a considerable amount of these images or videos is affected by different types of distortion or noise due to the non-controlled acquisition process. This article addresses the use of Blind Image Quality Assessment algorithms as a pre-processing approach in the retrieval of lifelogging images. As the amount of lifelog images has increased over the last few years, it is essential to find solutions for filtering images in a lifelog data collection. We evaluate the impact of a Blind Image Quality Assessment algorithm by performing different retrieval experiments with a lifelogging system named MEMORIA. The results are promising and show that our approach can reduce the number of images to process and retrieve in a lifelog data collection without losing valuable information, providing the user with the most valuable images. By excluding a considerable number of images in the pre-processing stage of a lifelogging system, its performance can be increased, saving time and resources.
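The pre-processing idea can be sketched as a simple filter: score every lifelog image with a no-reference quality measure and keep only images above a threshold before indexing. In the sketch below, a Laplacian-based sharpness proxy stands in for a real BIQA model (the paper evaluates an actual Blind Image Quality Assessment algorithm); all names are illustrative.

```python
import numpy as np

def sharpness_proxy(gray):
    """Variance of a 5-point Laplacian response; low values suggest blur.

    gray: 2D float array (grayscale image). A stand-in for a real BIQA score.
    """
    lap = (
        gray[:-2, 1:-1] + gray[2:, 1:-1] + gray[1:-1, :-2] + gray[1:-1, 2:]
        - 4.0 * gray[1:-1, 1:-1]
    )
    return float(lap.var())

def filter_lifelog(images, threshold):
    """Keep the names of images whose quality score passes the threshold."""
    return [name for name, img in images if sharpness_proxy(img) >= threshold]

# Sanity check: detailed content scores higher than a flat (blurred-out) frame.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
flat = np.full((64, 64), 0.5)
assert sharpness_proxy(sharp) > sharpness_proxy(flat)
kept = filter_lifelog([("a.jpg", sharp), ("b.jpg", flat)], threshold=0.1)
```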
DOI: 10.1145/3552467.3554798 · Published 2022-10-14
Citations: 2
Proceedings of the 2nd International Workshop on Interactive Multimedia Retrieval
DOI: 10.1145/3552467
Citations: 0