
Latest Publications in Computer Vision Applications in Sports

Aerodynamic analysis via foreground segmentation
Pub Date: 2017-01-29 DOI: 10.17863/CAM.8293
P. Carey, Stuart Bennett, Joan Lasenby, T. Purnell
Results from wind-tunnel testing of athletes cannot always be repeated on the track, but reducing aerodynamic drag is critical for racing. Drag force is highly correlated with an athlete's frontal area, so in this paper we describe a system to segment an athlete from the very challenging background found in a standard racing environment. Given an accurate segmentation, a front-on view, and the athlete's position (for scaling), one can effectively count the pixels and thereby measure the moving area. The method described does not rely on alteration of the track lighting, background, or athlete's appearance. An image-matting algorithm more commonly used in the film industry is combined with an innovative model-based pre-process to allow the whole measurement to be automated. Area results have better than one percent error compared to hand-extracted measurements over a representative period, while frame-by-frame measurements capture expected cyclic variation. A near real-time implementation permits rapid iteration of aerodynamic experiments during training.
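The core measurement reduces to counting foreground pixels and converting to physical area via the athlete's position-derived scale. A minimal sketch of that final step, assuming a binary segmentation mask and a known pixels-per-metre scale are already available (both hypothetical inputs; the paper's matting algorithm and model-based pre-process are not reproduced here):

```python
import numpy as np

def frontal_area(mask: np.ndarray, pixels_per_metre: float) -> float:
    """Frontal area in square metres from a binary foreground mask.

    mask             -- 2D array, nonzero where the athlete is foreground
    pixels_per_metre -- image scale at the athlete's position
    """
    foreground_pixels = np.count_nonzero(mask)
    return foreground_pixels / pixels_per_metre ** 2

# Hypothetical example: a 400x300 mask with a crude elliptical "athlete".
yy, xx = np.mgrid[0:400, 0:300]
mask = ((yy - 200) / 180.0) ** 2 + ((xx - 150) / 60.0) ** 2 <= 1.0
print(f"frontal area = {frontal_area(mask, pixels_per_metre=250.0):.3f} m^2")
```

The paper's sub-one-percent error claim concerns its full pipeline; this sketch simply takes the segmentation as given.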
Citations: 6
Comparison of a Virtual Game-Day Experience on Varying Devices
Pub Date: 2017-01-29 DOI: 10.2352/ISSN.2470-1173.2017.16.CVAS-346
John W. V. Miller, Holly Baiotto, Anastacia MacAllister, Melynda Hoover, Gabe Evans, Jonathan Schlueter, Vijay Kalivarapu, E. Winer
Collegiate athletics, particularly football, provide tremendous value to schools through branding, revenue, and publicity. As a result, extensive effort is put into recruiting talented students. When recruiting, home games are exceptional tools used to show a school's unique game-day atmosphere. However, this is not a viable option during the offseason or for off-site visits. This paper explores a solution to these challenges by using virtual reality (VR) to recreate the game-day experience. The Virtual Reality Application Center, in conjunction with Iowa State University (ISU) athletics, created a VR application mimicking the game-day experience at ISU. This application was displayed using the world's highest-resolution six-sided CAVE™, an Oculus Rift DK2 computer-driven head-mounted display (HMD), and a Merge VR smartphone-driven HMD. A between-subjects user study compared presence across the different systems and a video control. In total, 82 students participated, rating their presence using the Witmer and Singer questionnaire. Results revealed that while the CAVE™ scored the highest in presence, the Oculus and Merge experienced only a slight drop compared to the CAVE™. This result suggests that the mobile, ultra-low-cost Merge is a viable alternative to the CAVE™ and Oculus for delivering the game-day experience to ISU recruits.
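The design here is a between-subjects comparison of questionnaire scores across four conditions (CAVE™, Oculus, Merge, video control). The paper does not state its statistical test; a one-way ANOVA is one common way to analyze such a design. A minimal sketch with invented score arrays (the group sizes sum to the reported 82 participants, but the scores themselves are made up, not the study's data):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Hypothetical Witmer-Singer presence totals per condition (not the study's data).
cave   = rng.normal(130, 10, 21)
oculus = rng.normal(126, 10, 21)
merge  = rng.normal(124, 10, 20)
video  = rng.normal(105, 10, 20)

f_stat, p_value = f_oneway(cave, oculus, merge, video)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```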
Citations: 7
Virtual tracking shots for sports analysis
Pub Date: 2017-01-29 DOI: 10.2352/ISSN.2470-1173.2017.16.CVAS-342
Stuart Bennett, Joan Lasenby, T. Purnell
Reviewing athletic performance is a critical part of modern sports training, but snapshots only showing part of a course or exercise can be misleading, while travelling cameras are expensive. In this paper we describe a system merging the output of many autonomous inexpensive camera nodes distributed around a course to reliably synthesize tracking shots of multiple athletes training concurrently. Issues such as uncontrolled lighting, athlete occlusions and overtaking/pack-motion are dealt with, as is compensating for the quirks of cheap image sensors. The resultant system is entirely automated, inexpensive, scalable and provides output in near real-time, allowing coaching staff to give immediate and relevant feedback on a performance. Requiring no alteration to existing training exercises has boosted the system's uptake by coaches, with over 100,000 videos recorded to date.
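Synthesizing a tracking shot from fixed nodes ultimately means choosing, for each frame, which camera's view to show. A minimal sketch of one plausible selection rule, assuming per-camera, per-frame visibility scores for an athlete are already computed (the detection, synchronization, occlusion handling and sensor-compensation stages the paper describes are not reproduced; the switching penalty is an invented heuristic to avoid rapid cuts):

```python
from typing import Sequence

def stitch_tracking_shot(scores: Sequence[Sequence[float]],
                         switch_penalty: float = 0.1) -> list[int]:
    """Pick one camera per frame, discouraging rapid switching.

    scores[t][c] -- visibility score of the athlete in camera c at frame t
    Returns the chosen camera index for each frame.
    """
    shot = []
    current = max(range(len(scores[0])), key=lambda c: scores[0][c])
    for frame in scores:
        best = max(range(len(frame)), key=lambda c: frame[c])
        # Only cut to another camera when it is clearly better.
        if frame[best] > frame[current] + switch_penalty:
            current = best
        shot.append(current)
    return shot

# Hypothetical scores: an athlete moving past three camera nodes.
scores = [[0.9, 0.2, 0.0], [0.7, 0.5, 0.1], [0.3, 0.9, 0.2],
          [0.1, 0.8, 0.6], [0.0, 0.4, 0.9]]
print(stitch_tracking_shot(scores))  # [0, 0, 1, 1, 2]
```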
Citations: 0
Pose Estimation for Deriving Kinematic Parameters of Competitive Swimmers
Pub Date: 2017-01-29 DOI: 10.2352/ISSN.2470-1173.2017.16.CVAS-345
D. Zecha, C. Eggert, R. Lienhart
In competitive swimming, quantitative evaluation of kinematic parameters is a valuable tool for coaches, but also a labor-intensive task. We present a system which is able to automate the extraction of many kinematic parameters, such as stroke frequency, kick rates and stroke-specific intra-cyclic parameters, from video footage of an athlete. While this task can in principle be solved by human pose estimation, the problem is exacerbated by constantly changing self-occlusion and severe noise caused by air bubbles, splashes, light reflection and light refraction. Current approaches for pose estimation are unable to provide the necessary localization precision under these conditions to enable accurate estimates of all desired kinematic parameters. In this paper we reduce the problem of kinematic parameter derivation to detecting key frames with a deep neural network human pose estimator. We show that we can correctly detect key frames with a precision on par with human annotation performance. From the correctly located key frames, the aforementioned parameters can be successfully inferred.
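Once key frames marking the same point in successive stroke cycles are located, a parameter like stroke frequency follows by simple arithmetic. A minimal sketch, assuming hypothetical key-frame indices and a known video frame rate (the key-frame detector itself is not reproduced):

```python
import numpy as np

def stroke_frequency(keyframes: list[int], fps: float) -> float:
    """Mean stroke frequency in cycles per minute.

    keyframes -- frame indices of the same point in successive stroke cycles
    fps       -- video frame rate
    """
    periods = np.diff(keyframes) / fps          # seconds per stroke cycle
    return 60.0 / periods.mean()

# Hypothetical detections at 50 fps: one cycle roughly every 75 frames.
print(f"{stroke_frequency([12, 87, 161, 237, 311], fps=50.0):.1f} strokes/min")
```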
Citations: 9
Digital Playbook - A Teaching Tool for American Football
Pub Date: 2017-01-29 DOI: 10.2352/ISSN.2470-1173.2017.16.CVAS-347
M. Vorstandlechner, M. Gelautz, Christoph Putz
{"title":"Digital Playbook - A Teaching Tool for American Football","authors":"M. Vorstandlechner, M. Gelautz, Christoph Putz","doi":"10.2352/ISSN.2470-1173.2017.16.CVAS-347","DOIUrl":"https://doi.org/10.2352/ISSN.2470-1173.2017.16.CVAS-347","url":null,"abstract":"","PeriodicalId":261646,"journal":{"name":"Computer Vision Applications in Sports","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130007654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Goal!! Event detection in sports video
Pub Date: 2017-01-29 DOI: 10.2352/ISSN.2470-1173.2017.16.CVAS-344
Grigorios Tsagkatakis, M. Jaber, P. Tsakalides
Understanding complex events in unstructured video, like scoring a goal in a football game, is an extremely challenging task due to the dynamics, complexity and variation of video sequences. In this work, we attack this problem by exploiting the capabilities of the recently developed framework of deep learning. We consider independently encoding spatial and temporal information via convolutional neural networks, and fusing the features via regularized autoencoders. To demonstrate the capacities of the proposed scheme, a new dataset is compiled, composed of goal and no-goal sequences. Experimental results demonstrate that extremely high classification accuracy can be achieved, from a dramatically limited number of examples, by leveraging pretrained models with fine-tuned fusion of spatio-temporal features.

Introduction: Analyzing unstructured video streams is a challenging task for multiple reasons [10]. A first challenge is associated with the complexity of real-world dynamics manifested in such video streams, including changes in viewpoint, illumination and quality. In addition, while annotated image datasets are prevalent, fewer labeled datasets are available for video analytics. Last, the analysis of massive, high-dimensional video streams is extremely demanding, requiring significantly higher computational resources compared to still imagery [11]. In this work, we focus on the analysis of a particular type of video showing multi-person sport activities, and more specifically football (soccer) games. Sport videos in general are acquired from different vantage points, and the decision to select a single stream for broadcasting is taken by the director. As a result, the broadcast video stream is characterized by varying acquisition conditions, like zooming in near the goalpost during a goal and zooming out to cover the full field. In this complex situation, we consider the high-level objective of detecting specific and semantically meaningful events, like an opponent team scoring a goal. Succeeding in this task will allow automatic transcription of games, video summarization and automatic statistical analysis. Despite the many challenges associated with video analytics, the human brain is able to extract meaning and provide contextual information in a limited amount of time and from a limited set of training examples. From a computational perspective, the process of event detection in a video sequence amounts to two fundamental steps, namely (i) spatio-temporal feature extraction and (ii) example classification. Typically, feature extraction approaches rely on highly engineered handcrafted features like SIFT, which however are not able to generalize to more challenging cases. To achieve this objective, we consider the state-of-the-art framework of deep learning [18] and more specifically the case of Convolutional Neural Networks (CNNs) [16], which has taken by storm almost all problems related to computer vision, ran
Unlike image detection problems, feature extraction in video must address the associated challenges
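The fusion stage this abstract describes (separate spatial and temporal CNN features combined through a regularized autoencoder, then classified goal vs. no-goal) can be sketched compactly. A minimal PyTorch sketch under illustrative assumptions: the 512-dimensional per-clip feature vectors are taken as given, weight decay stands in for the paper's regularizer, and the dimensions and classifier head are invented rather than the paper's actual architecture:

```python
import torch
import torch.nn as nn

class FusionAutoencoder(nn.Module):
    """Fuse per-clip spatial and temporal CNN features into one compact code."""
    def __init__(self, feat_dim: int = 512, code_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(2 * feat_dim, code_dim), nn.ReLU())
        self.decoder = nn.Linear(code_dim, 2 * feat_dim)
        self.classifier = nn.Linear(code_dim, 2)   # goal / no-goal logits

    def forward(self, spatial, temporal):
        x = torch.cat([spatial, temporal], dim=1)  # concatenation fusion
        code = self.encoder(x)
        return self.decoder(code), self.classifier(code), x

model = FusionAutoencoder()
# weight_decay stands in for the autoencoder regularization (an assumption here)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
mse, ce = nn.MSELoss(), nn.CrossEntropyLoss()

# Hypothetical batch: 8 clips, one 512-d spatial and one 512-d temporal vector each.
spatial, temporal = torch.randn(8, 512), torch.randn(8, 512)
labels = torch.randint(0, 2, (8,))

reconstruction, logits, fused = model(spatial, temporal)
loss = mse(reconstruction, fused) + ce(logits, labels)  # reconstruct + classify
optimizer.zero_grad()
loss.backward()
optimizer.step()
```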
{"title":"Goal!! Event detection in sports video","authors":"Grigorios Tsagkatakis, M. Jaber, P. Tsakalides","doi":"10.2352/ISSN.2470-1173.2017.16.CVAS-344","DOIUrl":"https://doi.org/10.2352/ISSN.2470-1173.2017.16.CVAS-344","url":null,"abstract":"Understanding complex events from unstructured video, like scoring a goal in a football game, is an extremely challenging task due to the dynamics, complexity and variation of video sequences. In this work, we attack this problem exploiting the capabilities of the recently developed framework of deep learning. We consider independently encoding spatial and temporal information via convolutional neural networks and fusion of features via regularized Autoencoders. To demonstrate the capacities of the proposed scheme, a new dataset is compiled, composed of goal and no-goal sequences. Experimental results demonstrate that extremely high classification accuracy can be achieved, from a dramatically limited number of examples, by leveraging pretrained models with fine-tuned fusion of spatio-temporal features. Introduction Analyzing unstructured video streams is a challenging task for multiple reasons [10]. A first challenge is associated with the complexity of real world dynamics that are manifested in such video streams, including changes in viewpoint, illumination and quality. In addition, while annotated image datasets are prevalent, a smaller number of labeled datasets are available for video analytics. Last, the analysis of massive, high dimensional video streams is extremely demanding, requiring significantly higher computational resources compared to still imagery [11]. In this work, we focus on the analysis of a particular type of videos showing multi-person sport activities and more specifically football (soccer) games. Sport videos in general are acquired from different vantage points and the decision of selecting a single stream for broadcasting is taken by the director. As a result, the broadcasted video stream is characterized by varying acquisition conditions like zooming-in near the goalpost during a goal and zooming-out to cover the full field. In this complex situation, we consider the high level objective of detecting specific and semantically meaningful events like an opponent team scoring a goal. Succeeding in this task will allow the automatic transcription of games, video summarization and automatic statistical analysis. Despite the many challenges associated with video analytics, the human brain is able to extract meaning and provide contextual information in a limited amount of time and from a limited set of training examples. From a computational perspective, the process of event detection in a video sequence amounts to two foundamental steps, namely (i) spatio-temporal feature extraction and (ii) example classification. Typically, feature extraction approaches rely on highly engineered handcrafted features like the SIFT, which however are not able to generalize to more challenging cases. 
To achieve this objective, we consider the state-of-theart framework of deep learning [18] and more specifically the case of Convolutional Neural Networks (CNNs) [16], which has taken by storm almost all problems related to computer vision, ran","PeriodicalId":261646,"journal":{"name":"Computer Vision Applications in Sports","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126482068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15