Vision (Switzerland): Latest Publications

Coherent Interpretation of Entire Visual Field Test Reports Using a Multimodal Large Language Model (ChatGPT).
Q2 Medicine Pub Date: 2025-04-11 DOI: 10.3390/vision9020033
Jeremy C K Tan

This study assesses the accuracy and consistency of a commercially available large language model (LLM) in extracting and interpreting sensitivity and reliability data from entire visual field (VF) test reports for the evaluation of glaucomatous defects. Single-page anonymised VF test reports from 60 eyes of 60 subjects were analysed by an LLM (ChatGPT 4o) across four domains: test reliability, defect type, defect severity and overall diagnosis. The main outcome measures were accuracy of data extraction, interpretation of glaucomatous field defects and diagnostic classification. The LLM displayed 100% accuracy in the extraction of global sensitivity and reliability metrics and in classifying test reliability. It also demonstrated high accuracy (96.7%) in diagnosing whether the VF defect was consistent with a healthy, suspect or glaucomatous eye. The accuracy in correctly defining the type of defect was moderate (73.3%), which only partially improved when the model was provided with a more defined region of interest. The causes of incorrect defect type were mostly attributed to the wrong location, particularly confusing the superior and inferior hemifields. Numerical/text-based data extraction and interpretation were overall notably superior to image-based interpretation of VF defects. This study demonstrates both the potential and the limitations of multimodal LLMs in processing multimodal medical investigation data such as VF reports.
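In outline, the report-reading step above can be reproduced with any multimodal LLM API. The sketch below assumes the OpenAI Python SDK and a hypothetical single-page report image vf_report.png; the prompt and output format are illustrative, not the study's actual protocol.

```python
# Hedged sketch: send a VF report image to a multimodal LLM and ask for
# the four domains assessed in the study. Assumes the OpenAI Python SDK
# (pip install openai) and a hypothetical file "vf_report.png".
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("vf_report.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

prompt = (
    "From this visual field test report, extract: (1) test reliability "
    "(fixation losses, false positives/negatives), (2) defect type, "
    "(3) defect severity, and (4) an overall diagnosis "
    "(healthy / suspect / glaucomatous). Answer as JSON."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```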

Citations: 0
Cup and Disc Segmentation in Smartphone Handheld Ophthalmoscope Images with a Composite Backbone and Double Decoder Architecture.
Q2 Medicine Pub Date: 2025-04-11 DOI: 10.3390/vision9020032
Thiago Paiva Freire, Geraldo Braz Júnior, João Dallyson Sousa de Almeida, José Ribamar Durand Rodrigues Junior

Glaucoma is a visual disease that affects millions of people, and early diagnosis can prevent total blindness. One way to diagnose the disease is through fundus image examination, which analyzes the optic disc and cup structures. However, screening programs in primary care are costly and unfeasible. Neural network models have been used to segment optic nerve structures, assisting physicians in this task and reducing fatigue. This work presents a methodology to enhance morphological biomarkers of the optic disc and cup in images obtained by a smartphone coupled to an ophthalmoscope through a deep neural network, which combines two backbones and a dual decoder approach to improve the segmentation of these structures, as well as a new way to combine the loss weights in the training process. The models were numerically evaluated with Dice and IoU measures, reaching Dice scores of 95.92% and 85.30% and IoU scores of 92.22% and 75.68% for the optic disc and cup, respectively, on the BrG dataset. These findings indicate promising architectures for the fundus image segmentation task.
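For reference, both evaluation metrics are simple mask overlaps: Dice = 2|A∩B|/(|A| + |B|) and IoU = |A∩B|/|A∪B|, so Dice = 2·IoU/(1 + IoU). A minimal NumPy sketch on toy masks, not the authors' evaluation code:

```python
# Dice and IoU for binary segmentation masks, as used to score the
# optic disc and cup predictions; generic NumPy sketch.
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """pred and truth are boolean masks of identical shape."""
    inter = np.logical_and(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    iou = inter / np.logical_or(pred, truth).sum()
    return float(dice), float(iou)

# Toy example: two overlapping 30x30 square masks.
pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
truth = np.zeros((64, 64), dtype=bool); truth[15:45, 15:45] = True
print(dice_and_iou(pred, truth))  # ≈ (0.694, 0.532)
```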

Citations: 0
Artificial Intelligence vs. Human Cognition: A Comparative Analysis of ChatGPT and Candidates Sitting the European Board of Ophthalmology Diploma Examination.
Q2 Medicine Pub Date: 2025-04-09 DOI: 10.3390/vision9020031
Anna P Maino, Jakub Klikowski, Brendan Strong, Wahid Ghaffari, Michał Woźniak, Tristan Bourcier, Andrzej Grzybowski

Background/objectives: This paper aims to assess ChatGPT's performance in answering European Board of Ophthalmology Diploma (EBOD) examination papers and to compare these results to pass benchmarks and candidate results.

Methods: This cross-sectional study used a sample of past exam papers from the 2012, 2013, and 2020-2023 EBOD examinations. This study analyzed ChatGPT's responses to 440 multiple choice questions (MCQs), each containing five true/false statements (2200 statements in total), and 48 single best answer (SBA) questions.

Results: ChatGPT, for MCQs, scored on average 64.39%. ChatGPT's strongest metric performance for MCQs was precision (68.76%). ChatGPT performed best at answering pathology MCQs (Grubbs test p < 0.05). Optics and refraction had the lowest-scoring MCQ performance across all metrics. ChatGPT-3.5 Turbo performed worse than human candidates and ChatGPT-4o on easy questions (75% vs. 100% accuracy) but outperformed humans and ChatGPT-4o on challenging questions (50% vs. 28% accuracy). ChatGPT's SBA performance averaged 28.43%, with the highest score and strongest performance in precision (29.36%). Pathology SBA questions were consistently the lowest-scoring topic across most metrics. ChatGPT demonstrated a nonsignificant tendency to select option 1 more frequently (p = 0.19). When answering SBAs, human candidates scored higher than ChatGPT in all metric areas measured.
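The accuracy and precision figures above reduce to standard confusion-matrix arithmetic over the pooled true/false statements. A hedged sketch with made-up labels, assuming scikit-learn is available:

```python
# Accuracy and precision over true/false statement judgements, of the
# kind reported above; the labels here are illustrative, not study data.
from sklearn.metrics import accuracy_score, precision_score

truth = [1, 0, 1, 1, 0, 1, 0, 0]  # 1 = statement is actually true
model = [1, 0, 0, 1, 1, 1, 0, 0]  # model's true/false judgements

print(accuracy_score(truth, model))   # fraction judged correctly -> 0.75
print(precision_score(truth, model))  # TP / (TP + FP) for "true" calls -> 0.75
```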

Conclusions: ChatGPT performed better on true/false questions, scoring a pass mark in most instances. Performance was poorer for SBA questions, suggesting that ChatGPT's ability in information retrieval is better than its ability in knowledge integration. ChatGPT could become a valuable tool in ophthalmic education, allowing exam boards to test their exam papers to ensure they are pitched at the right level, marking open-ended questions and providing detailed feedback.

Citations: 0
Brain Functional Connectivity During First- and Third-Person Visual Imagery.
Q2 Medicine Pub Date: 2025-04-06 DOI: 10.3390/vision9020030
Ekaterina Pechenkova, Mary Rachinskaya, Varvara Vasilenko, Olesya Blazhenkova, Elena Mershina

The ability to adopt different perspectives, or vantage points, is fundamental to human cognition, affecting reasoning, memory, and imagery. While the first-person perspective allows individuals to experience a scene through their own eyes, the third-person perspective involves an external viewpoint, which is thought to demand greater cognitive effort and different neural processing. Despite the frequent use of perspective switching across various contexts, including modern media and in therapeutic settings, the neural mechanisms differentiating these two perspectives in visual imagery remain largely underexplored. In an exploratory fMRI study, we compared both activation and task-based functional connectivity underlying first-person and third-person perspective taking in the same 26 participants performing two spatial egocentric imagery tasks, namely imaginary tennis and house navigation. No significant differences in activation emerged between the first-person and third-person conditions. The network-based statistics analysis revealed a small subnetwork of the early visual and posterior temporal areas that manifested stronger functional connectivity during the first-person perspective, suggesting a closer sensory recruitment loop, or, in different terms, a loop between long-term memory and the "visual buffer" circuits. The absence of a strong neural distinction between the first-person and third-person perspectives suggests that third-person imagery may not fully decenter individuals from the scene, as is often assumed.

Citations: 0
Exploring Attention in Depth: Event-Related and Steady-State Visual Evoked Potentials During Attentional Shifts Between Depth Planes in a Novel Stimulation Setup.
Q2 Medicine Pub Date: 2025-04-03 DOI: 10.3390/vision9020028
Jonas Jänig, Norman Forschack, Christopher Gundlach, Matthias M Müller

Visuo-spatial attention acts as a filter for the flood of visual information. Until recently, experimental research in this area focused on neural dynamics of shifting attention in 2D space, leaving attentional shifts in depth less explored. In this study, twenty-three participants were cued to attend to one of two overlapping random-dot kinematograms (RDKs) at different stereoscopic depths in a novel experimental setup. These RDKs flickered at two different frequencies to evoke Steady-State Visual Evoked Potentials (SSVEPs), a neural signature of early visual stimulus processing. Subjects were instructed to detect coherent motion events in the to-be-attended-to plane/RDK. Behavioral data showed that subjects were able to perform the task and selectively respond to events at the cued depth. Event-Related Potentials (ERPs) elicited by these events, namely the Selection Negativity (SN) and the P3b, showed greater amplitudes for coherent motion events in the to-be-attended-to compared to the to-be-ignored plane/RDK, indicating that attention was shifted accordingly. Although our new experimental setting reliably evoked SSVEPs, SSVEP amplitude time courses did not differ between the to-be-attended-to and to-be-ignored stimuli. These results suggest that early visual areas may not optimally represent depth-selective attention, which might rely more on higher processing stages, as suggested by the ERP results.
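Frequency tagging of this kind is typically quantified by reading the single-sided amplitude spectrum at each flicker frequency. A generic NumPy sketch on simulated data; the flicker frequencies and sampling rate below are assumptions, not the study's parameters:

```python
# Recover SSVEP amplitudes at two flicker "tagging" frequencies from one
# EEG channel via FFT; simulated data, illustrative parameters only.
import numpy as np

fs = 500.0                    # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)  # 10 s of signal -> 0.1 Hz frequency bins
f1, f2 = 8.5, 12.0            # hypothetical flicker frequencies in Hz

# Simulated EEG: two SSVEPs (amplitudes 0.8 and 0.5) buried in noise.
rng = np.random.default_rng(0)
eeg = (0.8 * np.sin(2 * np.pi * f1 * t)
       + 0.5 * np.sin(2 * np.pi * f2 * t)
       + rng.standard_normal(t.size))

amp = 2 * np.abs(np.fft.rfft(eeg)) / t.size  # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for f in (f1, f2):
    k = np.argmin(np.abs(freqs - f))         # nearest frequency bin
    print(f"amplitude at {freqs[k]:.1f} Hz: {amp[k]:.2f}")
```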

Citations: 0
Gaze Error Estimation and Linear Transformation to Improve Accuracy of Video-Based Eye Trackers.
Q2 Medicine Pub Date: 2025-04-03 DOI: 10.3390/vision9020029
Varun Padikal, Alex Plonkowski, Penelope F Lawton, Laura K Young, Jenny C A Read

Eye tracking technology plays a crucial role in various fields such as psychology, medical training, marketing, and human-computer interaction. However, achieving high accuracy over a larger field of view in eye tracking systems remains a significant challenge, both in free viewing and in a head-stabilized condition. In this paper, we propose a simple approach to improve the accuracy of video-based eye trackers through the implementation of linear coordinate transformations. This method involves applying stretching, shearing, translation, or their combinations to correct gaze accuracy errors. Our investigation shows that re-calibrating the eye tracker via linear transformations significantly improves the accuracy of video-based trackers over a large field of view.
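A linear re-calibration of this kind can be fitted by least squares from a handful of calibration points: augmenting the raw gaze coordinates with a ones column lets a single 3x2 matrix capture stretching, shearing, and translation together. A minimal NumPy sketch with hypothetical coordinates, not the paper's implementation:

```python
# Fit a 2D affine correction (stretch, shear, translation) mapping raw
# gaze estimates onto known calibration targets; hypothetical data.
import numpy as np

# Raw tracker output vs. true on-screen target positions (assumed units).
raw = np.array([[0.10, 0.20], [0.90, 0.15], [0.55, 0.50],
                [0.12, 0.85], [0.88, 0.90]])
target = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.5],
                   [0.0, 1.0], [1.0, 1.0]])

# Solve target ≈ [raw, 1] @ A by least squares; A is 3x2
# (top 2x2 block = linear part, bottom row = translation).
X = np.hstack([raw, np.ones((raw.shape[0], 1))])
A, *_ = np.linalg.lstsq(X, target, rcond=None)

def correct(points: np.ndarray) -> np.ndarray:
    """Apply the fitted affine correction to raw gaze samples."""
    pts = np.atleast_2d(points)
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ A

print(correct(raw) - target)  # residual error after re-calibration
```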

Citations: 0
Tear Film Changes and Ocular Symptoms Associated with Soft Contact Lens Wear.
Q2 Medicine Pub Date: 2025-04-01 DOI: 10.3390/vision9020027
Eduardo Insua Pereira, Madalena Lira, Ana Paula Sampaio

Discomfort is one of the leading causes associated with contact lens dropout. This study investigated changes in the tear film parameters induced by lens wear and their relationship with ocular symptomology. Thirty-four lens wearers (32.9 ± 9.1 years, 7 men) and thirty-three non-lens wearers (29.4 ± 6.8 years, 12 men) participated in this clinical study. Subjects were categorised as asymptomatic (n = 11), moderately symptomatic (n = 15), or severely symptomatic (n = 8). Clinical evaluations were performed in the morning, including blink frequency and completeness, pre-corneal (NIBUT) and pre-lens (PL-NIBUT) non-invasive tear break-up times, lipid interference patterns, and tear meniscus height. Contact lens wearers had a higher percentage of incomplete blinks (37% vs. 19%, p < 0.001) and reduced tear meniscus height compared to controls (0.24 ± 0.08 vs. 0.28 ± 0.10 mm, p = 0.014). PL-NIBUT was shorter than NIBUT (7.6 ± 6.2 vs. 10.7 ± 9.3 s, p = 0.002). Statistically significant differences between the groups were found in the PL-NIBUT (p = 0.01) and NIBUT (p = 0.05), with asymptomatic subjects recording longer times than symptomatic ones. Long-term use of silicone-hydrogel lenses can affect tear stability, production, and adequate distribution through blinking. Ocular symptomology correlates with tear stability parameters in both lens wearers and non-wearers.

Citations: 0
Emerging Treatments for Persistent Corneal Epithelial Defects.
Q2 Medicine Pub Date: 2025-04-01 DOI: 10.3390/vision9020026
Jeonghyun Esther Kwon, Christie Kang, Amirhossein Moghtader, Sumaiya Shahjahan, Zahra Bibak Bejandi, Ahmad Alzein, Ali R Djalilian

Persistent corneal epithelial defects (PCEDs) are a challenging ocular condition characterized by the failure of complete corneal epithelial healing after an insult or injury, even after 14 days of standard care. There is a lack of therapeutics that target this condition and encourage re-epithelialization of the corneal surface in a timely and efficient manner. This review aims to provide an overview of current standards of management for PCEDs, highlighting novel, emerging treatments in this field. While many of the current non-surgical treatments aim to provide lubrication and mechanical support, novel non-surgical approaches are undergoing development to harness the proliferative and healing properties of human mesenchymal stem cells, platelets, lufepirsen, hyaluronic acid, thymosin β4, p-derived peptide, and insulin-like growth factor for the treatment of PCEDs. Novel surgical treatments focus on corneal neurotization and limbal cell reconstruction using novel scaffold materials and cell sources. This review provides insights into future PCED treatments that build upon current management guidelines.

Citations: 0
μετὰ τὰ ϕυσικά: Vision Far Beyond Physics.
Q2 Medicine Pub Date: 2025-03-26 DOI: 10.3390/vision9020025
Liliana Albertazzi

Vision Science is an area of study that focuses on specific aspects of visual perception and is conducted mainly in the restricted and controlled context of laboratories. In so doing, the methodological procedures adopted necessarily reduce the variables of natural perception. For the time being, it is extremely difficult to perform psychophysical, neurophysiological, and phenomenological experiments in open scenery, even if that is our natural visual experience. This study discusses four points whose status in Vision Science is still controversial: namely, the copresence of distinct visual phenomena of primary and secondary processes in natural vision; the role of visual imagination in seeing; the factors ruling the perception of global ambiguity and enigmatic and emotional atmosphere in the visual experience of a scene; and, if the phenomena of subjective vision are considered, what kind of new laboratories are available for studying visual perception in open scenery. In the framework of experimental phenomenology and the use of pictorial art as a complement and test for perceptual phenomena, a case study from painting showing the copresence of perceptual and mental visual processes is also discussed and analyzed. This has involved measuring color and light in specific zones of the painting chosen for analysis, relative to visual templates, using Natural Color System notation cards.

Citations: 0
Impact of Visual Input and Kinesiophobia on Postural Control and Quality of Life in Older Adults During One-Leg Standing Tasks.
Q2 Medicine Pub Date: 2025-03-20 DOI: 10.3390/vision9010024
Paul S Sung, Dongchul Lee

Visual conditions significantly influence fear of movement (FOM), a condition that impairs postural control and quality of life (QOL). This study examined how visual conditions influence sway velocity during repeated one-leg standing tasks and explored the potential relationship between postural control, FOM, and QOL in older adults with and without FOM. Thirty-seven older adults with FOM and 37 controls participated in the study. Postural sway velocity was measured across three repeated trials under visual conditions in both anteroposterior (AP) and mediolateral (ML) directions. The groups demonstrated a significant interaction under visual conditions (F = 7.43, p = 0.01). In the eyes-closed condition, the FOM group exhibited faster ML sway velocity than the control group, with significant differences across all three trials. There was a significant interaction between sway direction and vision (F = 27.41, p = 0.001). In addition, FOM demonstrated strong negative correlations with several QOL measures, including social functioning (r = -0.69, p = 0.001) and role limitations due to emotional problems (r = -0.58, p = 0.001), in the FOM group. While FOM influenced sway velocity during balance tasks, visual input emerged as a key determinant of postural control. The FOM group demonstrated a heightened reliance on vision, suggesting an increased need for vision-dependent strategies to maintain balance.

Citations: 0