
Displays: Latest Publications

Bridging the performance gap of 3D object detection in adverse weather conditions via camera-radar distillation (ChinaMM)
IF 3.4 | CAS Zone 2, Engineering & Technology | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-04-01 | Epub Date: 2025-12-11 | DOI: 10.1016/j.displa.2025.103320
Chongze Wang, Ruiqi Cheng, Haoqing Yu, Xuan Gong, Hai-Miao Hu
Robust 3D object detection in challenging weather scenarios remains difficult due to sensor and algorithm degradation caused by various environmental noise. In this paper, we propose a novel camera-radar-based 3D object detection framework that leverages a cross-modality knowledge distillation method to improve detection accuracy in adverse conditions, such as rain and snow. Specifically, we introduce a teacher-student training paradigm, where the teacher model is trained under clear weather and guides the student model trained under weather-degraded environments. We design three novel distillation losses focusing on spatial alignment, semantic consistency, and prediction refinement between different modalities to facilitate effective knowledge transfer. Moreover, a weather simulation module is introduced to generate adverse-weather-like input, enabling the student model to better learn robust features under challenging conditions. A gated fusion module is also integrated to adaptively fuse camera and radar features, enhancing robustness to modality-specific degradation. Experimental results on the nuScenes dataset show that our model outperforms multiple state-of-the-art methods, achieving superior results across common detection metrics (mAP, NDS) and per-class AP, particularly under challenging weather, with improvements of 3.5–3.9% mAP and 4.3–4.8% NDS in rainy and snowy scenes.
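The abstract names three distillation losses (spatial alignment, semantic consistency, prediction refinement) but does not give their formulas. As a rough illustration of how such teacher-student terms are commonly combined, here is a minimal PyTorch sketch; the specific loss forms (L2 feature matching, cosine channel statistics, temperature-scaled KL) and the tensor shapes are assumptions, not the paper's definitions.

```python
import torch
import torch.nn.functional as F

def distillation_losses(student_feat, teacher_feat, student_logits, teacher_logits, temperature=2.0):
    """Generic teacher-student distillation terms (illustrative only).

    The paper's three losses are not specified in the abstract; these are
    common stand-ins for spatial, semantic, and prediction-level transfer.
    """
    # "Spatial alignment": match feature maps element-wise (assumed L2 form).
    spatial_loss = F.mse_loss(student_feat, teacher_feat.detach())

    # "Semantic consistency": align channel-wise statistics via cosine similarity.
    s = F.normalize(student_feat.flatten(2).mean(-1), dim=1)
    t = F.normalize(teacher_feat.detach().flatten(2).mean(-1), dim=1)
    semantic_loss = (1.0 - (s * t).sum(dim=1)).mean()

    # "Prediction refinement": soften teacher predictions and use KL divergence.
    pred_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits.detach() / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

    return spatial_loss, semantic_loss, pred_loss


if __name__ == "__main__":
    sf = torch.randn(2, 64, 32, 32)       # student BEV features (toy shapes)
    tf = torch.randn(2, 64, 32, 32)       # teacher BEV features
    sl = torch.randn(2, 10, 32 * 32)      # student per-location class logits
    tl = torch.randn(2, 10, 32 * 32)      # teacher per-location class logits
    print([l.item() for l in distillation_losses(sf, tf, sl, tl)])
```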
Citations: 0
Bioinspired micro-/nano-composite structures for simultaneous enhancement of light extraction efficiency and output uniformity in Micro-LEDs
IF 3.4 | CAS Zone 2, Engineering & Technology | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-04-01 | Epub Date: 2025-11-13 | DOI: 10.1016/j.displa.2025.103286
Jingyu Liu, Jiawei Zhang, Zhenyou Zou, Yibin Lin, Jinyu Ye, Wenfu Huang, Chaoxing Wu, Yongai Zhang, Jie Sun, Qun Yan, Xiongtu Zhou
The strong total internal reflection (TIR) in micro light-emitting diodes (Micro-LEDs) significantly limits light extraction efficiency (LEE) and uniformity of light distribution, thereby hindering their industrial applications. Inspired by the layered surface structures found in firefly lanterns, this study proposes a flexible bioinspired micro-/nano-composite structure that effectively enhances both LEE and the uniformity of light output. Finite-Difference Time-Domain (FDTD) simulations demonstrate that microstructures contribute to directional light extraction, whereas nanostructures facilitate overall optical optimization. A novel fabrication approach integrating grayscale photolithography, mechanical stretching, and plasma treatment was developed, enabling the realization of micro-/nano-composite structures with tunable design parameters. Experimental results indicate a 40.5% increase in external quantum efficiency (EQE) and a 41.6% improvement in power efficiency (PE) for blue Micro-LEDs, accompanied by enhanced angular light distribution, leading to wider viewing angles and near-ideal light uniformity. This advancement effectively resolves the longstanding challenge of balancing efficiency and uniformity in light extraction, thereby facilitating the industrialization of Micro-LED technology.
Citations: 0
Lightweight deformable attention for event-based monocular depth estimation
IF 3.4 | CAS Zone 2, Engineering & Technology | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-04-01 | Epub Date: 2025-12-01 | DOI: 10.1016/j.displa.2025.103303
Jianye Yang, Shaofan Wang, Jingyi Wang, Yanfeng Sun, Baocai Yin
Event cameras are neuromorphically inspired sensors that output brightness changes as a stream of asynchronous events instead of intensity frames. Event-based monocular depth estimation forms a foundation for widespread high-dynamic vision applications. Existing monocular depth estimation networks, such as CNNs and transformers, suffer from insufficient exploration of spatio-temporal correlation and from high complexity. In this paper, we propose the Lightweight Deformable Attention Network (LDANet) to circumvent these two issues. The key component of LDANet is the Mixed Attention with Temporal Embedding (MATE) module, which consists of a lightweight deformable attention layer and a temporal embedding layer. The former, as an improvement of deformable attention, is equipped with a drifted token representation and a K-nearest multi-head deformable-attention block, capturing locally-spatial correlation. The latter is equipped with a cross-attention layer that queries the previous temporal event frame, encouraging the network to memorize the history of depth clues and capture temporal correlation. Experiments on a real scenario dataset and a simulation scenario dataset show that LDANet achieves a satisfactory balance between inference efficiency and depth estimation accuracy. The code is available at https://github.com/wangsfan/LDA.
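The temporal embedding layer described above queries the previous event frame through cross-attention. A minimal sketch of such a layer, assuming flattened spatial tokens and standard multi-head attention, is shown below; the module name, dimensions, and residual structure are illustrative assumptions, and the actual MATE module is in the authors' repository linked above.

```python
import torch
import torch.nn as nn

class TemporalCrossAttention(nn.Module):
    """Illustrative cross-attention over the previous event frame's tokens.

    Current-frame features act as queries; previous-frame features act as
    keys/values, so depth cues from the past frame can be recalled.
    (Assumed structure, not the exact MATE temporal embedding layer.)
    """

    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, curr_tokens, prev_tokens):
        # curr_tokens, prev_tokens: (batch, num_tokens, dim)
        attended, _ = self.attn(query=curr_tokens, key=prev_tokens, value=prev_tokens)
        return self.norm(curr_tokens + attended)   # residual connection + norm


if __name__ == "__main__":
    b, n, d = 2, 16 * 16, 64                       # toy token grid
    layer = TemporalCrossAttention(dim=d)
    out = layer(torch.randn(b, n, d), torch.randn(b, n, d))
    print(out.shape)                               # torch.Size([2, 256, 64])
```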
Citations: 0
Differences in streaming quality impact viewer expectations, attitudes and reactions to video
IF 3.4 | CAS Zone 2, Engineering & Technology | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-04-01 | Epub Date: 2026-01-12 | DOI: 10.1016/j.displa.2026.103350
Christopher A. Sanchez, Nisha Raghunath, Chelsea Ahart
Given the massive amount of visual media consumed across the world every day, an open question is whether deviations from high-quality streaming can negatively impact viewers' opinions of and attitudes towards viewed content. Previous research has shown that reductions in perceptual quality can negatively impact attitudes in other contexts, and such changes in quality often lead to corresponding changes in attitudes. Are users sensitive to changes in video quality, and does this impact reactions to viewed content? For example, do users enjoy lower-quality videos as much as higher-quality versions? Do quality differences also make viewers less receptive to the content of videos? Across two studies, participants watched a video in lower or higher quality and were then queried about their viewing experience, including ratings of attitudes towards video streaming and video content as well as measures of factual recall. Results indicated that viewers significantly prefer videos presented in higher quality, which drives future viewing intentions. Further, while factual memory for information was equivalent across video quality, participants who viewed the higher-quality video were more likely to show an affective reaction to the video and to change their attitudes relative to the presented content. These results have implications for the design and delivery of online video content, and suggest that any deviation from higher-quality presentation can bias opinions of the viewed content. Lower-quality videos decreased attitudes towards content and negatively impacted viewers' receptiveness to the presented content.
Citations: 0
A new iterative inverse display model
IF 3.4 | CAS Zone 2, Engineering & Technology | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-04-01 | Epub Date: 2026-01-16 | DOI: 10.1016/j.displa.2026.103342
María José Pérez-Peñalver, S.-W. Lee, Cristina Jordán, Esther Sanabria-Codesal, Samuel Morillas
In this paper, we propose a new inverse model for display characterization based on the direct model developed in Kim and Lee (2015). We use an iterative method to compute which inputs produce a desired color expressed in device-independent color coordinates. Whereas iterative approaches have been used for this task before, the main novelty of our proposal is the use of specific heuristics, based on the aforementioned display model and color science principles, to achieve efficient and accurate convergence. On the one hand, to set the initial point of the iterative process, we use orthogonal projections of the desired color chromaticity, xy, onto the display's chromaticity triangle to find the initial ratio that the RGB coordinates need to have. Subsequently, we use a factor product, preserving RGB proportions, to initially approximate the desired color's luminance; this factor is obtained through a nonlinear model of the relation between RGB and luminance. On the other hand, to reduce the number of iterations needed, we use the direct model mentioned above: to set the RGB values of the next iteration, we look at the differences between the color predicted by the direct model for the current RGB values and the desired color coordinates, treating chromaticity and luminance separately, following the same reasoning as for the initial point. As the experimental results show, the method is accurate, efficient and robust. Compared with the state of the art, the method performs especially well on low-quality displays, where the physical assumptions made by other models do not hold completely.
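To make the iteration concrete, the sketch below inverts a toy direct model, a fixed primary matrix plus gamma that stands in for the direct model of Kim and Lee (2015) the paper actually uses, with a plain proportional correction; the chromaticity-triangle initialization and the separate chromaticity/luminance handling described in the abstract are simplified away, so this is only a structural illustration.

```python
import numpy as np

# Stand-in "direct" display model: gamma-corrected RGB -> XYZ via a primary matrix.
# (The matrix and gamma are assumptions for illustration, not the paper's model.)
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
GAMMA = 2.2

def direct_model(rgb):
    return M @ np.clip(rgb, 0.0, 1.0) ** GAMMA

def inverse_model(target_xyz, iters=50, step=0.5):
    """Iteratively find the RGB whose predicted XYZ matches target_xyz."""
    rgb = np.full(3, 0.5)                              # crude initial guess
    for _ in range(iters):
        xyz = direct_model(rgb)
        error = target_xyz - xyz                       # mismatch in XYZ space
        # Proportional correction mapped back through the primaries; the paper
        # instead corrects chromaticity and luminance separately.
        rgb = np.clip(rgb + step * (np.linalg.pinv(M) @ error), 0.0, 1.0)
    return rgb

if __name__ == "__main__":
    true_rgb = np.array([0.2, 0.6, 0.35])
    target = direct_model(true_rgb)
    est = inverse_model(target)
    print("recovered RGB:", est, "max error:", np.abs(est - true_rgb).max())
```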
Citations: 0
Design and evaluation of Avatar: An ultra-low-latency immersive human–machine interface for teleoperation
IF 3.4 | CAS Zone 2, Engineering & Technology | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-04-01 | Epub Date: 2025-11-19 | DOI: 10.1016/j.displa.2025.103292
Junjie Li, Dewei Han, Jian Xu, Kang Li, Zhaoyuan Ma
Spatially separated teleoperation is crucial for inaccessible or hazardous scenarios but requires intuitive human–machine interfaces (HMIs) to ensure situational awareness, especially visual perception. While 360° panoramic vision offers immersion and a wide field of view, its high latency reduces efficiency and quality and causes motion sickness. This paper presents the Avatar system, an ultra-low-latency panoramic vision platform for teleoperation and telepresence. Measured with a convenient method, Avatar's capture-to-display latency is only 220 ms. Two experiments with 43 participants demonstrated that Avatar achieves near-scene perception efficiency in near-field visual search. Its ultra-low latency also ensured high efficiency and quality in teleoperation tasks. Analysis of subjective questionnaires and physiological indicators confirmed that Avatar provides operators with a strong sense of immersion and presence. The system's design and verification guide future development of universal, efficient HMIs for diverse applications.
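The 220 ms figure above is a capture-to-display latency. As a hedged illustration of how such a number can be reduced from per-frame timestamp logs (this is not the authors' measurement procedure, and the logging setup is an assumption), a small sketch follows.

```python
import statistics

def latency_stats(capture_ts, display_ts):
    """Summarize capture-to-display latency from per-frame timestamps (seconds).

    capture_ts / display_ts: dicts mapping frame_id -> timestamp, e.g. logged
    at the camera side and at the display side for the same frame identifiers.
    (Illustrative reduction only; the paper's measurement setup may differ.)
    """
    deltas = [display_ts[f] - capture_ts[f]
              for f in capture_ts if f in display_ts]
    deltas_ms = sorted(d * 1000.0 for d in deltas)
    return {
        "mean_ms": statistics.fmean(deltas_ms),
        "median_ms": statistics.median(deltas_ms),
        "p95_ms": deltas_ms[int(0.95 * (len(deltas_ms) - 1))],
    }

if __name__ == "__main__":
    # Synthetic 30 fps logs with roughly 215-235 ms of end-to-end delay.
    cap = {i: i * 0.033 for i in range(100)}
    disp = {i: i * 0.033 + 0.215 + 0.01 * (i % 3) for i in range(100)}
    print(latency_stats(cap, disp))
```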
Citations: 0
Ultrasound image quality assessment of robot screening based on dual perspective multi feature collaboration
IF 3.4 | CAS Zone 2, Engineering & Technology | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-04-01 | Epub Date: 2025-12-15 | DOI: 10.1016/j.displa.2025.103319
Weihua He, Li Liang, Fei Ouyang, Guangming Yang, Peng Ding, Tong Zhang, Zhiyong Zhang
With advancements in robotics and artificial intelligence, Robotic Autonomous Ultrasound Screening (RAUSS) has emerged as a critical research area in medical technology. A major challenge in RAUSS is the automatic assessment of ultrasound image quality. In clinical practice, physicians evaluate images based on both pixel-level technical metrics and anatomical content. However, existing methods often emphasize positive anatomical features while neglecting negative factors such as noise and artifacts. To address this, we propose a dual-perspective multi-feature collaborative network (DM-Net) for ultrasound image quality assessment. Built on ResNet, the model extracts both positive anatomical features and negative artifacts, integrating them through a cross-attention mechanism for comprehensive quality evaluation. Experimental results show that the proposed method achieves superior consistency with expert evaluations, with a PLCC of 0.8318, SROCC of 0.8334, and an accuracy of 76.07%. It outperforms conventional methods used in robotic systems and aligns more closely with clinical assessments. Additionally, the system processes each image in just 0.062 s, meeting real-time requirements for robotic screening. This work provides a clinically relevant quality feedback solution for RAUSS and lays the foundation for future research on ultrasound video assessment.
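DM-Net reportedly fuses a "positive" anatomical branch and a "negative" artifact branch through cross-attention. The toy sketch below shows one plausible way two ResNet-style token sets could be fused and mapped to a quality score; the dimensions, pooling, and regression head are assumptions rather than the DM-Net specification.

```python
import torch
import torch.nn as nn

class DualBranchFusion(nn.Module):
    """Toy cross-attention fusion of 'positive' and 'negative' feature tokens.

    Positive (anatomy) tokens query negative (noise/artifact) tokens so the
    quality head sees both; sizes and the scoring head are illustrative.
    """

    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 1))

    def forward(self, pos_tokens, neg_tokens):
        # pos_tokens, neg_tokens: (batch, tokens, dim) from two backbone branches
        fused, _ = self.cross(query=pos_tokens, key=neg_tokens, value=neg_tokens)
        fused = fused + pos_tokens                       # residual connection
        return self.head(fused.mean(dim=1)).squeeze(-1)  # one score per image


if __name__ == "__main__":
    model = DualBranchFusion()
    score = model(torch.randn(4, 49, 256), torch.randn(4, 49, 256))
    print(score.shape)   # torch.Size([4])
```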
Citations: 0
RTSIQA: A database and method for real-world traffic scenes image quality assessment
IF 3.4 | CAS Zone 2, Engineering & Technology | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-04-01 | Epub Date: 2025-11-28 | DOI: 10.1016/j.displa.2025.103299
Fangfang Lu, Haoyang Ni, Yijie Huang, Nan Guo, Kaiwei Zhang, Wei Sun, Xiongkuo Min
Traffic scene image quality assessment (IQA) is critical for intelligent transportation systems and autonomous driving applications. However, existing IQA methods are primarily designed for general real-world scenes and struggle to adapt to the structured elements and statistical characteristics unique to traffic scenes. Moreover, these methods overlook the distinct assessment needs arising from the spatially imbalanced perceptual importance in traffic scenes: some small regions (e.g., vehicles, pedestrians, traffic signals) are vital for driving safety, whereas some large regions (e.g., sky), despite their spatial dominance, are less critical. In addition, different traffic objects exhibit distinct degradation patterns due to their unique physical properties and texture structures, rendering a global quality score insufficient to represent differences in quality among these elements. Furthermore, the lack of IQA databases specifically for real-world traffic scenes has constrained further research development. To address these challenges, we construct a new real-world traffic scene IQA database providing both whole image quality scores and per-category quality scores for traffic object categories. Furthermore, we develop an adaptive multi-branch no-reference IQA network based on a dual-network architecture. This network extracts multi-scale features through pre-trained Swin Transformer combined with a semantic structure compensation module to enhance local structure modeling capability. It introduces a multi-branch assessment module utilizing object detection to identify traffic object location and category, achieving differentiated quality assessment for various traffic object categories. Experimental results show that the proposed method effectively outputs image quality for different objects within the same image on our constructed database and performs excellently on multiple general IQA databases.
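Since the network outputs a whole-image score plus per-category scores for detected traffic objects, a hedged sketch of one possible aggregation step follows: detector boxes pool backbone features (here via torchvision's roi_align) and region scores are averaged per category. The coordinate convention, feature shapes, and quality head are assumptions, not the RTSIQA design.

```python
import torch
from torchvision.ops import roi_align
from collections import defaultdict

def per_category_scores(feat_map, boxes, labels, quality_head):
    """Average a region-quality prediction per detected traffic-object category.

    feat_map:      (1, C, H, W) backbone features for one image
    boxes:         (K, 4) detected boxes in feature-map coordinates (x1, y1, x2, y2)
    labels:        list of K category names from the detector
    quality_head:  callable mapping a (C,)-dim pooled vector to a scalar score
    (Illustrative aggregation only.)
    """
    pooled = roi_align(feat_map, [boxes], output_size=7)      # (K, C, 7, 7)
    region_vecs = pooled.mean(dim=(2, 3))                     # (K, C)
    buckets = defaultdict(list)
    for vec, label in zip(region_vecs, labels):
        buckets[label].append(quality_head(vec))
    return {label: torch.stack(scores).mean().item()
            for label, scores in buckets.items()}

if __name__ == "__main__":
    C = 32
    head = torch.nn.Linear(C, 1)
    feats = torch.randn(1, C, 60, 80)
    boxes = torch.tensor([[5., 5., 20., 20.], [30., 10., 50., 40.], [2., 2., 10., 12.]])
    print(per_category_scores(feats, boxes, ["car", "pedestrian", "car"],
                              lambda v: head(v).squeeze()))
```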
Citations: 0
Towards camouflaged object detection via global guidance and cascading refinement
IF 3.4 | CAS Zone 2, Engineering & Technology | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-04-01 | Epub Date: 2025-11-06 | DOI: 10.1016/j.displa.2025.103278
Dan Wu, Mengyin Wang, Fuming Sun
Camouflaged object detection is characterized by targets with fuzzy boundaries and diverse sizes, set against backgrounds similar to the target objects. Because of these characteristics, existing methods tend to improve detection performance by building very complex models while ignoring computational efficiency. In addition, such targets are hard to capture and therefore hard to localize, and heavy background noise causes detailed features to be lost during processing. To address the above problems, we propose an efficient Global Guidance and Cascading Refinement Network (GCNet) with a streamlined structure. Firstly, considering the model size, we employ a lightweight SMT as the backbone. Secondly, we design a Rough Position Module (RPM) to coarsely localize the target by collecting global semantic information and guiding global features to anchor near the target location with high quality. Finally, we introduce a Feature Refinement Module (FRM), which employs a reverse attention mechanism to enhance feature discrimination and highlights camouflaged regions by refining features in an efficient cascading manner. Extensive experimental results show that GCNet outperforms 20 current methods on four benchmark datasets. Importantly, GCNet has a low parameter count, low computational complexity, and a very competitive inference speed, successfully balancing model size against recognition accuracy. The code is released at https://github.com/wd61419/GCNet.
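Reverse attention, as used in the FRM, typically weights features by the complement of the current prediction so the network focuses on regions it has not yet covered. The generic sketch below illustrates that step; the layer sizes and residual formulation are assumptions rather than the exact FRM, whose implementation is in the repository linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReverseAttentionRefine(nn.Module):
    """Generic reverse-attention refinement step (illustrative, not the exact FRM).

    The coarse prediction is inverted (1 - sigmoid) so features in regions the
    current prediction misses are emphasized, producing a residual correction.
    """

    def __init__(self, channels=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 1),
        )

    def forward(self, feat, coarse_pred):
        # feat: (B, C, H, W); coarse_pred: (B, 1, h, w) logits from a coarser stage
        pred = F.interpolate(coarse_pred, size=feat.shape[-2:], mode="bilinear",
                             align_corners=False)
        reverse = 1.0 - torch.sigmoid(pred)           # highlight un-detected regions
        residual = self.conv(feat * reverse)          # correction estimated there
        return pred + residual                        # refined logits


if __name__ == "__main__":
    block = ReverseAttentionRefine()
    out = block(torch.randn(2, 64, 44, 44), torch.randn(2, 1, 11, 11))
    print(out.shape)   # torch.Size([2, 1, 44, 44])
```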
Citations: 0
Enhancing medical image segmentation: A self-supervised approach with global feature enhancement and edge constraint guidance
IF 3.4 | CAS Zone 2, Engineering & Technology | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-04-01 | Epub Date: 2025-11-26 | DOI: 10.1016/j.displa.2025.103300
Miao Wang, Zechen Zheng, Congqian Wang, Chao Fan, Xuelei He
Medical image segmentation has grown in importance as a computer-aided diagnostic tool. However, because unlabeled medical data lack clear supervision signals, they may lead to ill-defined optimization goals and the learning of spurious-correlation features. To deal with these issues, a self-supervised medical image segmentation model based on edge attention and global feature enhancement (GFEM) is proposed. The model extracts local and global image information in separate branches through global feature enhancement. A Mamba-based feature fusion module (MFF) is used to strengthen the relation between local and global features. To pursue accurate segmentation, an edge attention module and a compound edge loss function (CEEG-Loss) are combined to guide the edge information of the segmented object. The model was evaluated on the Abdomen and CHAOS datasets, achieving average Dice scores of 79.70% and 78.81%, respectively. Extensive evaluations confirm that our model significantly outperforms baselines and remains competitive with other methods.
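The abstract does not define CEEG-Loss. As an assumed stand-in only, the sketch below combines a Dice region term with a boundary-weighted BCE whose weights come from a morphological-gradient edge map of the ground truth; this is a common compound-edge pattern, not the paper's loss.

```python
import torch
import torch.nn.functional as F

def edge_map(mask, kernel_size=3):
    """Approximate mask boundary via a morphological gradient (max-pool minus min-pool)."""
    pad = kernel_size // 2
    dilated = F.max_pool2d(mask, kernel_size, stride=1, padding=pad)
    eroded = -F.max_pool2d(-mask, kernel_size, stride=1, padding=pad)
    return (dilated - eroded).clamp(0, 1)

def compound_edge_loss(logits, mask, edge_weight=2.0, eps=1e-6):
    """Dice + edge-weighted BCE (an assumed stand-in for the paper's CEEG-Loss)."""
    prob = torch.sigmoid(logits)
    inter = (prob * mask).sum(dim=(2, 3))
    dice = 1.0 - (2 * inter + eps) / (prob.sum(dim=(2, 3)) + mask.sum(dim=(2, 3)) + eps)
    weights = 1.0 + edge_weight * edge_map(mask)        # emphasize boundary pixels
    bce = F.binary_cross_entropy_with_logits(logits, mask, weight=weights)
    return dice.mean() + bce

if __name__ == "__main__":
    logits = torch.randn(2, 1, 64, 64)
    mask = (torch.rand(2, 1, 64, 64) > 0.7).float()
    print(compound_edge_loss(logits, mask).item())
```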
Citations: 0