
2022 Symposium on Eye Tracking Research and Applications: Latest Publications

Using Eye Tracking Data for Enhancing Adaptive Learning Systems
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3532195
Kathrin Kennel
Adaptive learning systems analyse a learner's input and respond on the basis of it, for example by providing individual feedback or selecting appropriate follow-up tasks. To provide good feedback, such a system must have a high diagnostic capability. Collecting gaze data alongside the traditional data obtained through mouse and keyboard input seems a promising approach for this. We use the example of graphical differentiation to investigate whether and how the integration of eye tracking data into such a system can succeed. For this purpose, we analyse students' eye tracking data and gather empirical understanding of which measures are suitable as decision support for adaptation.
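The abstract leaves the concrete adaptation rule open; as a minimal sketch of the idea of gaze measures as decision support (all feature names, thresholds, and the task-selection logic below are hypothetical, not taken from the paper):

```python
# Minimal sketch of gaze-informed adaptation; thresholds and features are
# hypothetical illustrations, not the paper's measures.
from dataclasses import dataclass

@dataclass
class LearnerState:
    answer_correct: bool          # from mouse/keyboard input
    dwell_on_graph_s: float       # total fixation time on the function graph
    transitions_graph_axes: int   # gaze switches between graph and axes

def choose_followup(state: LearnerState) -> str:
    """Pick feedback/next task from input data plus gaze-based measures."""
    if state.answer_correct:
        return "harder_task"
    # Wrong answer: gaze data helps diagnose *why* it went wrong.
    if state.dwell_on_graph_s < 2.0:
        return "prompt_inspect_graph"       # learner barely looked at the graph
    if state.transitions_graph_axes < 3:
        return "hint_relate_graph_to_axes"  # little integration of graph and axes
    return "worked_example"                 # looked thoroughly but still failed

print(choose_followup(LearnerState(False, 1.2, 5)))  # -> prompt_inspect_graph
```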
Citations: 0
Poster: A Preliminary Investigation on Eye Gaze-based Concentration Recognition during Silent Reading of Text
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3531632
Saki Tanaka, Airi Tsuji, K. Fujinami
We propose machine learning models that recognize states of non-concentration from eye-gaze data, with the aim of increasing productivity. The experimental results show that a Random Forest classifier with a 12 s window can separate the states with an F1-score above 0.9.
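A sketch of the windowing-plus-classifier pipeline the abstract describes; only the 12 s window and the Random Forest match the text, while the feature set and the synthetic data are assumptions for illustration:

```python
# Windowed gaze features + Random Forest; features and data are stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

def window_features(gaze_xy, fs=60, win_s=12):
    """Slice a (N, 2) gaze trace into 12 s windows and compute simple stats."""
    win = fs * win_s
    feats = []
    for start in range(0, len(gaze_xy) - win + 1, win):
        w = gaze_xy[start:start + win]
        vel = np.linalg.norm(np.diff(w, axis=0), axis=1)
        feats.append([w[:, 0].std(), w[:, 1].std(), vel.mean(), vel.max()])
    return np.array(feats)

# Synthetic stand-in data: steady reading gaze vs. wandering gaze.
concentrated = rng.normal(0, 0.5, (60 * 12 * 40, 2))
wandering = rng.normal(0, 2.0, (60 * 12 * 40, 2))
X = np.vstack([window_features(concentrated), window_features(wandering)])
y = np.array([1] * 40 + [0] * 40)  # 1 = concentrating, 0 = not

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```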
Citations: 0
Impact of Gaze Uncertainty on AOIs in Information Visualisations
Pub Date : 2022-06-08 DOI: 10.1145/3517031.3531166
Yao Wang, Maurice Koch, Mihai Bâce, D. Weiskopf, A. Bulling
Gaze-based analysis of areas of interest (AOIs) is widely used in information visualisation research to understand how people explore visualisations or to assess the quality of visualisations with respect to key characteristics such as memorability. However, nearby AOIs in visualisations amplify the uncertainty caused by gaze estimation error, which strongly influences the mapping between gaze samples or fixations and different AOIs. We contribute a novel investigation into gaze uncertainty and quantify its impact on AOI-based analysis of visualisations using two novel metrics: the Flipping Candidate Rate (FCR) and Hit Any AOI Rate (HAAR). Our analysis of 40 real-world visualisations, including human gaze and AOI annotations, shows that gaze uncertainty frequently and significantly impacts the analysis conducted in AOI-based studies. Moreover, we analysed four visualisation types and found that bar and scatter plots are usually designed in a way that causes more uncertainty than line and pie plots in gaze-based analysis.
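The exact FCR and HAAR definitions are given in the paper; the following is only one plausible Monte-Carlo reading of them (the Gaussian error model, sigma, and rectangle AOIs are all assumptions), intended to convey why nearby AOIs destabilise fixation-to-AOI mapping:

```python
# Illustrative (not the paper's exact definitions): treat gaze error as
# Gaussian noise and ask how often a fixation's AOI assignment is unstable.
import numpy as np

rng = np.random.default_rng(1)

def aoi_of(p, aois):
    """Return index of the first AOI rectangle containing point p, else -1."""
    for i, (x0, y0, x1, y1) in enumerate(aois):
        if x0 <= p[0] <= x1 and y0 <= p[1] <= y1:
            return i
    return -1

def fcr_haar(fixations, aois, sigma=15.0, n_samples=200):
    """Monte-Carlo estimates of a Flipping-Candidate-like rate and a
    Hit-Any-AOI-like rate under isotropic gaze error of std `sigma` px."""
    flipping, hit_any = 0, 0
    for f in fixations:
        hits = [aoi_of(f + rng.normal(0, sigma, 2), aois) for _ in range(n_samples)]
        if any(h >= 0 for h in hits):
            hit_any += 1
        labels = {h for h in hits if h >= 0}
        if len(labels) > 1:   # gaze error can map this fixation to >1 AOI
            flipping += 1
    n = len(fixations)
    return flipping / n, hit_any / n

aois = [(0, 0, 100, 100), (110, 0, 210, 100)]   # two nearby AOIs
fixations = np.array([[50.0, 50.0], [105.0, 50.0], [300.0, 50.0]])
print(fcr_haar(fixations, aois))  # the fixation near the AOI gap flips
```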
Citations: 3
Tracker/Camera Calibration for Accurate Automatic Gaze Annotation of Images and Videos
Pub Date : 2022-06-01 DOI: 10.1145/3517031.3529643
Swati Jindal, Harsimran Kaur, R. Manduchi
Modern appearance-based gaze tracking algorithms require vast amounts of training data, with images of a viewer annotated with “ground truth” gaze direction. The standard approach to obtain gaze annotations is to ask subjects to fixate at specific known locations, then use a head model to determine the location of “origin of gaze”. We propose using an IR gaze tracker to generate gaze annotations in natural settings that do not require the fixation of target points. This requires prior geometric calibration of the IR gaze tracker with the camera, such that the data produced by the IR tracker can be expressed in the camera’s reference frame. This contribution introduces a simple tracker/camera calibration procedure based on the PnP algorithm and demonstrates its use to obtain a full characterization of gaze direction that can be used for ground truth annotation.
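A minimal sketch of a PnP-based extrinsic calibration with OpenCV, in the spirit of the procedure the abstract names; the point correspondences and intrinsics below are placeholders, and the paper's actual calibration target and pipeline are not reproduced here:

```python
# PnP sketch: recover the rigid transform from tracker coordinates to camera
# coordinates from 3D points known in the tracker frame and their 2D
# projections in the camera image. All values are illustrative placeholders.
import cv2
import numpy as np

# 3D calibration-target points in the IR tracker's frame (metres).
object_points = np.array([
    [0.00, 0.00, 0.50], [0.10, 0.00, 0.50], [0.00, 0.10, 0.50],
    [0.10, 0.10, 0.60], [0.05, 0.05, 0.55], [0.15, 0.05, 0.50],
], dtype=np.float64)
# Their detected pixel locations in the camera image.
image_points = np.array([
    [320.0, 240.0], [480.0, 240.0], [320.0, 400.0],
    [453.3, 373.3], [392.7, 312.7], [560.0, 320.0],
], dtype=np.float64)
# Camera intrinsics from a prior checkerboard calibration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)

# Any 3D gaze point in the tracker frame can now be expressed in the camera
# frame: g_camera = R @ g_tracker + t.
g_tracker = np.array([0.05, 0.05, 0.55])
g_camera = R @ g_tracker + tvec.ravel()
print(ok, g_camera)
```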
Citations: 0
Multidisciplinary Reading Patterns of Digital Documents
Pub Date : 2022-05-17 DOI: 10.1145/3517031.3531630
Bhanuka Mahanama, Gavindya Jayawardena, Yasasi Abeysinghe, V. Ashok, S. Jayarathna
Reading plays a vital role in keeping researchers updated on recent developments in their field, including but not limited to solutions to various problems and collaborative studies between disciplines. Prior studies find that reading patterns vary depending on the researcher's level of expertise in the content of the document. We present a pilot study of eye-tracking measures during a reading task with participants from different areas of expertise, with the intention of characterizing reading patterns using both eye movement and pupillary information.
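The abstract does not list its measures; as a sketch of the kind of per-participant summary such a study might compare (the specific features below, fixation duration, regression rate, and pupil size, are assumptions):

```python
# Hypothetical per-participant reading measures from eye movement and
# pupillary data; the feature list is illustrative, not the paper's.
import numpy as np

def reading_measures(fix_durations_ms, fix_x_px, pupil_mm):
    """Summarise fixation durations, regression rate, and pupil size."""
    fix_durations_ms = np.asarray(fix_durations_ms, dtype=float)
    fix_x_px = np.asarray(fix_x_px, dtype=float)
    dx = np.diff(fix_x_px)
    return {
        "mean_fixation_ms": float(fix_durations_ms.mean()),
        "regression_rate": float((dx < 0).mean()),  # leftward saccades
        "mean_pupil_mm": float(np.mean(pupil_mm)),
    }

print(reading_measures([220, 180, 250, 300],
                       [100, 180, 150, 260],
                       [3.1, 3.3, 3.2, 3.4]))
```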
Citations: 2
Interaction Design of Dwell Selection Toward Gaze-based AR/VR Interaction
Pub Date : 2022-04-18 DOI: 10.1145/3517031.3531628
Toshiya Isomoto, Shota Yamanaka, B. Shizuki
In this paper, we first position dwell selection among gaze-based interaction techniques and outline its advantages over head-gaze selection, the mainstream interface for HMDs. Next, we show how dwell selection and head-gaze selection are used in an actual interaction situation. By comparing these two selection methods, we describe the potential of dwell selection as an essential AR/VR interaction.
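For readers unfamiliar with the technique, a minimal dwell-selection loop looks like the sketch below; the 0.6 s threshold and 60 Hz frame timing are illustrative choices, not parameters from the paper:

```python
# Minimal dwell-selection loop: a target is selected once continuous gaze on
# it exceeds a dwell threshold. Threshold and frame rate are illustrative.
DWELL_THRESHOLD_S = 0.6

def dwell_select(gaze_targets, dt=1 / 60):
    """gaze_targets: per-frame id of the target under gaze (None = no target).
    Returns (frame_index, target) of the first selection, or None."""
    current, elapsed = None, 0.0
    for i, t in enumerate(gaze_targets):
        if t is not None and t == current:
            elapsed += dt
            if elapsed >= DWELL_THRESHOLD_S:
                return i, t                # dwell completed -> selection fires
        else:
            current, elapsed = t, 0.0      # gaze moved: restart the timer
    return None

# 50 frames (~0.83 s) on "button_a" crosses the 0.6 s threshold.
frames = [None] * 10 + ["button_a"] * 50
print(dwell_select(frames))
```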
Citations: 2
HPCGen: Hierarchical K-Means Clustering and Level Based Principal Components for Scan Path Generation
Pub Date : 2022-01-19 DOI: 10.1145/3517031.3529625
Wolfgang Fuhl
In this paper, we present a new approach for decomposing scan paths and its utility for generating new scan paths. For this purpose, we apply the K-Means clustering procedure to the raw gaze data and then apply it iteratively to find further clusters within the found clusters. The found clusters are grouped for each level in the hierarchy, and the most important principal components are computed from the data contained in them. Using this tree hierarchy and the principal components, new scan paths can be generated that match the human behavior of the original data. We show that the generated data is very useful for producing new data for scan path classification but can also be used to generate fake scan paths. Code can be downloaded here https://atreus.informatik.uni-tuebingen.de/seafile/d/8e2ab8c3fdd444e1a135/?p=%2FHPCGen&mode=list.
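A sketch of the two ingredients the abstract names, hierarchical K-Means over raw gaze points and PCA per cluster at each level; the generation step is only indicated, and all parameters (depth, k, minimum cluster size) are illustrative rather than the paper's, whose reference implementation is at the URL above:

```python
# Hierarchical K-Means + per-level PCA over gaze points; parameters are
# illustrative, not HPCGen's. See the linked repository for the original.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

def hierarchical_kmeans_pca(points, depth=2, k=2, min_size=10):
    """Recursively split gaze points with K-Means; fit a PCA in each cluster."""
    results = []
    def recurse(pts, level):
        if level > depth or len(pts) < max(min_size, k):
            return
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pts)
        for c in range(k):
            cluster = pts[labels == c]
            if len(cluster) < 2:
                continue
            pca = PCA(n_components=2).fit(cluster)
            results.append((level, cluster.mean(axis=0), pca))
            recurse(cluster, level + 1)
    recurse(points, 1)
    return results

gaze = rng.normal(0, 1, (400, 2)) * [50, 30] + [300, 200]  # synthetic gaze cloud
tree = hierarchical_kmeans_pca(gaze, depth=2, k=2)

# A generated fixation could be drawn around a cluster mean along its
# principal axes; chaining such draws across the tree yields a new scan path.
level, mean, pca = tree[0]
new_fixation = mean + pca.components_[0] * rng.normal(0, np.sqrt(pca.explained_variance_[0]))
print(level, new_fixation)
```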
Citations: 4
An Assessment of the Eye Tracking Signal Quality Captured in the HoloLens 2
Pub Date : 2021-11-14 DOI: 10.1145/3517031.3529626
Samantha Aziz, Oleg V. Komogortsev
We present an analysis of the eye tracking signal quality of the HoloLens 2's integrated eye tracker. Signal quality was measured from eye movement data captured during a random saccades task, drawn from a new eye movement dataset collected from 30 healthy adults. We characterize the eye tracking signal quality of the device in terms of spatial accuracy, spatial precision, temporal precision, linearity, and crosstalk. Most notably, our evaluation of spatial accuracy reveals that the eye movement data in our dataset appears to be uncalibrated. Recalibrating the data using a subset of our dataset produces notably better eye tracking signal quality.
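Two of the reported measures have widely used definitions, which the sketch below assumes (the paper's exact formulas are not reproduced): spatial accuracy as the mean angular offset from the fixated target, and spatial precision as sample-to-sample RMS, both in degrees of visual angle:

```python
# Standard-definition sketches of spatial accuracy and RMS precision;
# the gaze samples here are synthetic, not from the HoloLens 2 dataset.
import numpy as np

def angular_offsets(gaze_deg, target_deg):
    """Euclidean offset in degrees between gaze samples and a fixed target."""
    return np.linalg.norm(np.asarray(gaze_deg) - np.asarray(target_deg), axis=1)

def spatial_accuracy(gaze_deg, target_deg):
    return float(angular_offsets(gaze_deg, target_deg).mean())

def rms_precision(gaze_deg):
    d = np.diff(np.asarray(gaze_deg), axis=0)   # sample-to-sample steps
    return float(np.sqrt((np.linalg.norm(d, axis=1) ** 2).mean()))

rng = np.random.default_rng(3)
target = np.array([5.0, 0.0])                    # target position in degrees
# Samples with a constant 0.9 deg offset (poor calibration) plus jitter.
gaze = target + np.array([0.8, -0.4]) + rng.normal(0, 0.1, (500, 2))
print("accuracy (deg):", spatial_accuracy(gaze, target))
print("precision RMS (deg):", rms_precision(gaze))
```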
Citations: 14