
Displays: Latest Publications

Structural hint-guided colorization network for sketch colorization
IF 3.4 · CAS Tier 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-02-05 · DOI: 10.1016/j.displa.2026.103373
Yunjiao Ma, Zhenzhen He, Jun Xiang, Ning Zhang, Ruru Pan
Sketch colorization, which adds color to images whose structure is conveyed only by black-and-white lines, is inefficient when performed manually, driving the need for intelligent solutions. Existing techniques, such as natural-language-guided, label-guided, and color-hint methods, suffer from spatial imprecision, limited color diversity, or poor modeling of long-range dependencies and weak control over color transitions. The proposed Structural Hint-Guided Colorization Network generates color hints via superpixel decomposition with line-density analysis to adapt to structural complexity, integrates a Transformer branch into the Residual Network to capture global dependencies, and uses a hybrid loss for controlled color transitions. Experiments show balanced precision and flexibility on complex sketches, with a Peak Signal-to-Noise Ratio (PSNR) of 23.221 and a Structural Similarity Index Measure (SSIM) of 0.853. Compared with the baseline, improvements of 5.63% in PSNR and 3.90% in SSIM validate the effectiveness of the proposed method.
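The hint-generation step lends itself to a short illustration. The Python sketch below is not the authors' implementation: it assumes scikit-image's SLIC for superpixel decomposition, treats the fraction of dark stroke pixels inside each superpixel as its line density, and places a color hint at the centroid of every sufficiently complex region; the function name, the thresholds, and the reference image color_ref are all assumptions.

# Minimal sketch of structure-adaptive color-hint generation (illustrative only;
# function name, thresholds, and the reference image color_ref are assumptions).
import numpy as np
from skimage.segmentation import slic

def generate_color_hints(sketch_gray, color_ref, n_segments=200, density_thresh=0.05):
    # sketch_gray: HxW float array in [0, 1] with dark strokes; color_ref: HxWx3 image.
    segments = slic(sketch_gray, n_segments=n_segments, compactness=0.1,
                    channel_axis=None)          # low compactness suits [0, 1] grayscale
    stroke_mask = sketch_gray < 0.5             # crude stroke binarization
    hints = []
    for label in np.unique(segments):
        region = segments == label
        line_density = stroke_mask[region].mean()  # structural-complexity proxy
        if line_density > density_thresh:          # denser line work -> place a hint
            ys, xs = np.nonzero(region)
            cy, cx = int(ys.mean()), int(xs.mean())
            hints.append((cy, cx, tuple(color_ref[cy, cx])))
    return hints

Regions with denser line work receive more hints, which matches the paper's stated goal of adapting hint placement to structural complexity.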
Citations: 0
Multi-view 3D point cloud registration method based on generated multi-scale information granules
IF 3.4 · CAS Tier 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-02-04 · DOI: 10.1016/j.displa.2026.103372
Chen Yang, Jixiang Nie, Hui Chen, Weina Wang, Wanquan Liu
Point cloud registration typically relies on point-pair feature extraction. However, point cloud features are low-dimensional, and point-wise processing lacks topological structure and incurs high computational complexity. To address these challenges, a multi-view 3D point cloud registration method based on generated multi-scale information granules is proposed to produce a complete 3D reconstruction. Specifically, during granule generation, Fast Point Feature Histograms (FPFH) are integrated into fuzzy C-means clustering to preserve geometric features while reducing computational cost. Furthermore, to ensure feature completeness across regions of varying density, a surface-complexity threshold is employed to merge fine-grained granules and eliminate relatively flat surfaces. This avoids over-segmentation and redundancy, thereby improving the efficiency of point cloud processing. Finally, to tackle the uneven distribution of overlapping areas and noise-induced mismatches, a hierarchical GMM-based 3D registration framework built on multi-scale information granules is constructed. Point cloud granules are dynamically updated in real time to ensure registration between granules with complete geometric features, thus improving registration accuracy. Experiments on benchmark datasets and real-world collected data demonstrate that the proposed method outperforms existing methods in multi-view registration, offering improved accuracy and efficiency.
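To make the granule-generation step concrete, here is a toy sketch that clusters concatenated XYZ coordinates and FPFH descriptors with a hand-rolled fuzzy C-means. The Open3D calls are real, but the feature weight alpha, the cluster count, and the function name are assumptions rather than the paper's settings.

# Toy sketch: information-granule generation by fuzzy C-means over XYZ + FPFH
# features (Open3D calls are real; alpha, sizes, and names are assumptions).
import numpy as np
import open3d as o3d

def make_granules(pcd, n_clusters=16, m=2.0, alpha=0.1, iters=50):
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamKNN(knn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=0.25, max_nn=100))
    # N x (3 + 33) feature matrix: coordinates plus down-weighted descriptors
    feats = np.hstack([np.asarray(pcd.points), alpha * np.asarray(fpfh.data).T])
    rng = np.random.default_rng(0)
    U = rng.dirichlet(np.ones(n_clusters), size=len(feats))   # soft memberships
    for _ in range(iters):                                    # standard FCM updates
        Um = U ** m
        centers = (Um.T @ feats) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(feats[:, None, :] - centers[None], axis=2) + 1e-9
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U.argmax(axis=1)                                   # granule label per point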
Citations: 0
Robust image steganography based on residual and multi-attention enhanced Generative Adversarial Networks
IF 3.4 · CAS Tier 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-02-04 · DOI: 10.1016/j.displa.2026.103384
Yuling Luo, Zhaohui Chen, Baoshan Lu, Yiting Huang, Qiang Fu, Sheng Qin, Junxiu Liu
Generative Adversarial Networks (GANs) have significantly improved data security in image steganography. However, existing GAN-based approaches often fail to consider the impact of transmission noise and rely on separately trained encoder–decoder architectures, which hinder the accurate recovery of hidden image data. To address these limitations, we propose a Residual and Multi-Attention Enhanced GAN (RME-GAN) for image steganography, which integrates residual networks, attention mechanisms, and multi-objective optimization to effectively enhance the recovery quality of secret images. In the generator, a residual preprocessing network combined with a global attention mechanism efficiently extracts transmission-noise features. In the extractor, a gated attention module is introduced to align encoder and decoder features, thereby improving decoding accuracy. Moreover, a multi-objective loss function is formulated to jointly optimize the encoder and decoder through end-to-end training, enhancing the consistency between them. Experimental results on widely used datasets, including LFW, ImageNet, and Pascal, demonstrate that the proposed RME-GAN achieves superior robustness against noise and significantly improves Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) performance compared to existing methods.
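The gated attention module admits a compact rendering. The PyTorch block below is speculative (the class name and the 1x1-convolution gating are assumptions, not the paper's architecture): a learned per-pixel gate blends encoder and decoder feature maps.

# Speculative sketch of a gated attention block for encoder/decoder feature
# alignment (module name and gating design are assumptions).
import torch
import torch.nn as nn

class GatedAttentionFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid())

    def forward(self, enc_feat, dec_feat):
        g = self.gate(torch.cat([enc_feat, dec_feat], dim=1))  # per-pixel gate in (0, 1)
        return g * enc_feat + (1 - g) * dec_feat               # soft feature alignment

Because the gate is computed from both streams, the decoder side can, in principle, down-weight encoder channels corrupted by transmission noise.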
Citations: 0
Does display type matter for change detection? Comparing immersive and non-immersive displays under low and high semantic availability
IF 3.4 · CAS Tier 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-02-04 · DOI: 10.1016/j.displa.2026.103379
Changrui Zhu, Ernst Kruijff, Harvey Stedman, Vijay M. Pawar, Simon Julier
Change detection is a cognitively challenging process that involves three stages: spotting (becoming aware of a change); localising (establishing the specific location of the change); and identifying (recognising the nature of the change). Each of these stages can be influenced both by the way the data is presented (e.g., display type) and by the fidelity of that data. To explore these issues, we conducted two studies, both of which examined the effects of display type (immersive virtual reality (VR) versus desktop monitor (DM)) and the semantic availability of the scene (low versus high realism).
Study 1 (N=38) explored the VR–DM differences in a broad scope, examining six change types spanning both spatial and non-spatial changes: disappear, appear, translation, rotation, replacement, and colour. However, there were no significant differences between VR and DM in spotting, localising, and identifying at either level of (semantic) realism. Study 2 (N=20) followed this up by exploring only two types of spatial change (translation and rotation) at a much finer degree of granularity while retaining the same experimental paradigm with necessary refinements. Study 2 showed a significant VR advantage over DM, with different patterns across realism conditions: in low-realism scenes, VR significantly outperformed DM on localisation and change-type identification overall, with the largest VR–DM contrasts observed for the smallest translations. In high-realism scenes, the only significant effect was a display-by-magnitude interaction for change-type identification at the smallest translations. Taking the two studies together, VR benefits are most likely for subtle spatial changes, particularly small translations, when semantic availability is limited. Questionnaire ratings also suggested that reliance on visual features varies with semantic availability; semantic cues were rated significantly higher than other features in high-realism scenes only. Finally, there was no significant difference between VR and DM in terms of workload, motion sickness, and self-confidence, suggesting that the perceptual advantages of VR come at no additional physical or cognitive cost for change detection.
Citations: 0
A visual attention-based model for VR sickness assessment
IF 3.4 · CAS Tier 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-02-04 · DOI: 10.1016/j.displa.2026.103376
Yongqing Cai, Cheng Han, Wei Quan, Yuechen Zhang
As Virtual Reality (VR) products and content increase year by year, a growing number of users are engaging with VR videos. However, many users experience discomfort such as headaches and dizziness during VR experiences, a phenomenon known as VR sickness. To enhance user comfort, this study proposes a VR sickness assessment model based on visual attention mechanisms, enabling automatic classification of VR content so users can select experiences suited to their needs. The proposed model comprises an attention-stream subnetwork, inspired by user attention mechanisms, and a motion-stream subnetwork, which jointly form a dual-stream evaluation system. Leveraging a transformer architecture, the model establishes self-attention over temporal and spatial sequences to capture their interdependent features. A multi-level fusion strategy extracts low-level, high-level, and global features, while attention mechanisms adaptively integrate these multi-level features to achieve precise VR sickness assessment. Experiments on publicly available datasets demonstrate the effectiveness of the visual attention mechanism in improving assessment accuracy. The model achieved 88.18% and 92.22% accuracy on two public datasets, respectively, a significant improvement over existing studies.
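As a rough picture of the dual-stream design, the PyTorch sketch below encodes an attention stream and a motion stream separately and fuses them with cross-attention; all layer sizes, token shapes, and names are assumptions, and the paper's multi-level fusion is reduced here to a single fusion step.

# Rough dual-stream sketch (sizes, names, and the fusion scheme are assumptions).
import torch
import torch.nn as nn

class DualStreamVRSA(nn.Module):
    def __init__(self, dim=256, heads=4, layers=2, classes=2):
        super().__init__()
        def encoder():
            layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               batch_first=True)
            return nn.TransformerEncoder(layer, num_layers=layers)
        self.attn_stream = encoder()
        self.motion_stream = encoder()
        self.fuse = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, classes)

    def forward(self, attn_tokens, motion_tokens):   # (B, T, dim) token sequences
        a = self.attn_stream(attn_tokens)
        m = self.motion_stream(motion_tokens)
        fused, _ = self.fuse(a, m, m)                # attention stream queries motion
        return self.head(fused.mean(dim=1))          # clip-level sickness prediction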
Citations: 0
Localization knowledge-driven segmentation of arteries in ultrasound images
IF 3.4 · CAS Tier 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-02-04 · DOI: 10.1016/j.displa.2026.103381
Mengxue Yan, Zirui Wang, Zhenfeng Li, Peng Wang, Pang Wu, Xianxiang Chen, Lidong Du, Li Li, Hongbo Chang, Zhen Fang
Accurate segmentation of the arterial lumen in ultrasound images is crucial for clinical diagnosis and hemodynamic assessment, but is challenged by low contrast, artifacts, and surrounding tissues with similar morphology. Together, these factors lead to significant localization ambiguity, which severely hampers the performance of segmentation models. To address this issue, we propose a novel Localization Knowledge-Driven Segmentation (LKDS) framework, which guides accurate segmentation through explicit localization. The proposed framework first acquires robust localization knowledge through a Localization Prior Learning (LPL) process on a coarsely annotated dataset, which is then efficiently transferred and adapted to target datasets via a few-shot pseudo-labeling strategy. Operationally, the LKDS framework generates a dynamic Localization Map (LM) for each image to explicitly guide a subsequent network in performing accurate segmentation. Extensive experiments on two distinct arterial ultrasound datasets show that our LKDS framework not only accelerates training convergence but also significantly outperforms state-of-the-art implicit segmentation methods. Our work demonstrates that explicitly incorporating localization knowledge is an effective strategy for significantly enhancing the performance of arterial segmentation.
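The central interface, feeding an explicit Localization Map alongside the image, takes only a few lines to express. The wrapper below is a minimal sketch under assumptions (the backbone interface and the two-channel input are illustrative, not the paper's network):

# Minimal sketch of localization-map conditioning (backbone interface assumed).
import torch
import torch.nn as nn

class LocalizationGuidedSeg(nn.Module):
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone   # any segmentation net accepting 2-channel input

    def forward(self, ultrasound, loc_map):
        # (B, 1, H, W) image stacked with its (B, 1, H, W) localization map
        return self.backbone(torch.cat([ultrasound, loc_map], dim=1))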
Citations: 0
Effects of different viewing modes in virtual reality games on visual function parameters and subjective symptoms: A cross-sectional study
IF 3.4 · CAS Tier 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-02-04 · DOI: 10.1016/j.displa.2026.103371
Meng Liu, Lei Fan, Jiechun Lin, Zhenhao Song, Qi Li, Yingxiang Han, Dajiang Wang, Xiaofei Wang
Virtual reality (VR) games are increasingly applied in various fields, yet prolonged immersion can induce VR sickness. In unidirectional VR game modes, targets mainly appear in front of the player, requiring minimal head movement, whereas multidirectional modes present targets from multiple directions, demanding greater head and body rotation for a full 360° experience. Although VR-induced discomfort has been studied, the specific effects of different modes on binocular visual function and VR sickness remain insufficiently understood. This study examined how two different viewing modes of the VR game Beat Saber—unidirectional and multidirectional—affect binocular visual function and subjective symptoms. Thirty-three participants played each mode for 30 min using a head-mounted display. The Simulator Sickness Questionnaire (SSQ), Visual Fatigue Scale, and phoropter-based binocular visual function tests were conducted before and after gameplay. Significant increases were observed in total and subscale SSQ scores and visual fatigue scores after both modes, indicating that VR gaming induces adverse symptoms. The accommodative convergence to accommodation ratio (AC/A) decreased significantly after the multidirectional mode, suggesting greater effects on binocular accommodation and vergence in multidirectional mode. Near exophoria was negatively correlated with visual fatigue after both the unidirectional and multidirectional modes; accommodative response (AR) correlated positively with visual fatigue after the unidirectional mode; and negative relative accommodation (NRA) correlated negatively with visual fatigue after the multidirectional mode. These findings provide insights into how VR gameplay mode influences VR sickness, visual fatigue, and binocular visual function, supporting the development of VR design standards and user experience optimization.
Citations: 0
Electroless plating of high-quality Ni microbumps for high-density micro-LEDs realized via surface plasma treatment and solution wettability enhancement
IF 3.4 · CAS Tier 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-02-03 · DOI: 10.1016/j.displa.2026.103370
Shuaishuai Wang, Zhihua Wang, Peiquan Zheng, Yijian Zhou, Chengwu Wu, Taifu Lang, Xin Lin, Xin Wu, Caihong Yan, Qun Yan, Kaixin Zhang, Jie Sun
To fabricate high-quality bump arrays on Micro-LEDs and thereby enhance the yield and reliability of high-density Micro-LED devices, we applied plasma treatment to Micro-LED samples and added a specific concentration of surfactant to the electroless plating solution, which improves the wettability of both the Micro-LED surface and the plating solution and creates a conducive environment for bump formation. In comparison to traditional nickel-bump fabrication on high-density substrates, the synergistic effect of plasma treatment and wettability-enhanced electroless plating yielded nickel bumps on Micro-LEDs with a fast growth rate, high uniformity, excellent shear strength, and low surface roughness: the bump growth rate, array uniformity, and shear strength improved by 56.8%, 86%, and 377%, respectively, while surface roughness decreased by 94.4%. This work provides a critical pathway for fabricating high-quality nickel bumps and enhancing the yield and reliability of highly integrated Micro-LED devices.
Citations: 0
Change detection of large-field-of-view video images in low-light environments with cross-scale feature fusion and pseudo-change mitigation
IF 3.4 · CAS Tier 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-02-02 · DOI: 10.1016/j.displa.2026.103374
Yani Guo, Zhenhong Jia, Gang Zhou, Xiaohui Huang, Yue Li, Mingyan Li, Guohong Chen, Junjie Li
Change detection for large-field-of-view video images (e.g., those acquired by Eagle Eye devices) in low-light environments faces numerous obstacles, mainly the difficulty of differentiating genuine changes from illumination-induced pseudo-changes, vulnerability to intricate noise interference, and limited robustness in multi-scale change detection. To address these issues, this paper proposes a deep learning framework for large-field-of-view change detection in low-light environments, consisting of three core modules: Cross-scale Attention Feature Fusion, Difference Enhancement and Optimization, and Pseudo-Change Suppression and Multi-scale Fusion. First, the Cross-scale Attention Feature Fusion (CAF) module employs a cross-scale attention mechanism to fuse multi-scale features, capturing change information at various scales. Structural differences are then enhanced by the Difference Enhancement and Optimization (DEO) module through frequency-domain decomposition and boundary-aware strategies, mitigating the impact of illumination variations. Subsequently, illumination-induced pseudo-changes are suppressed by the Pseudo-Change Suppression and Multi-scale Fusion (PSF) module with Pseudo-Change Filtering Attention, and multi-scale feature fusion is performed to generate accurate change maps. Additionally, an end-to-end optimization strategy incorporating contrastive learning and self-supervised pseudo-label generation further enhances the model's robustness and generalization across various low-light scenarios. Experimental results demonstrate that, compared with other methods, the proposed method improves the F1 score by 3.65% and accuracy by 1.84%, verifying its ability to accurately distinguish genuine from spurious changes in low-light environments.
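To give the cross-scale fusion idea concrete form, the sketch below upsamples a coarse feature map and lets each fine-scale position attend over it; the class name, the projections, and the residual wiring are assumptions rather than the paper's actual CAF design.

# Hypothetical cross-scale attention fusion sketch (names and wiring assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossScaleAttentionFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, kernel_size=1)
        self.kv = nn.Conv2d(channels, 2 * channels, kernel_size=1)

    def forward(self, fine, coarse):
        coarse = F.interpolate(coarse, size=fine.shape[-2:], mode='bilinear',
                               align_corners=False)
        q = self.q(fine).flatten(2).transpose(1, 2)         # (B, HW, C) queries
        k, v = self.kv(coarse).chunk(2, dim=1)              # keys/values from coarse scale
        attn = torch.softmax(q @ k.flatten(2) / q.shape[-1] ** 0.5, dim=-1)
        out = attn @ v.flatten(2).transpose(1, 2)           # (B, HW, C)
        return out.transpose(1, 2).reshape_as(fine) + fine  # residual fusion

Note that the full HW x HW attention map is memory-hungry; a practical version would restrict it to windows or pooled keys.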
Citations: 0
Committee Elections with Candidate Attribute Constraints
IF 3.4 · CAS Tier 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-02-02 · DOI: 10.1016/j.displa.2026.103377
Aizhong Zhou, Fengbo Wang, Jiong Guo, Yutao Liu
The Mixture of Experts (MoE) is a neural network architecture widely used in fields such as natural language processing (e.g., large language models, multilingual translation), computer vision (e.g., medical image analysis, multi-modal learning), and recommendation systems. A core problem in MoE is how to select, from among all experts, the expert assigned to a specific task. This problem can be cast as an election in which each expert is a candidate and the winner (one or several candidates) is the expert assigned to the task according to the votes. We study a variant of committee elections from the perspective of computational complexity. Given a set of candidates, each possessing a set of attributes and a profit value, and a set of constraints specified as propositional logic expressions over the attributes, the task is to select a committee of k candidates that satisfies all constraints and whose total profit meets a given threshold. Regarding classical complexity, we design two polynomial-time algorithms for two special cases and provide several NP-hardness results. Moreover, we examine the parameterized complexity and obtain FPT, W[1]-hardness, and para-NP-hardness results.
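Since the general problem carries NP-hardness results, a brute-force enumerator is a reasonable executable statement of the task. The sketch below (all names illustrative) tries every size-k committee and checks the profit threshold plus the attribute constraints, given here as Boolean predicates rather than propositional formulas.

# Brute-force reference for the constrained committee problem (exponential in k,
# consistent with the NP-hardness results; all names are illustrative).
from itertools import combinations

def select_committee(candidates, constraints, k, threshold):
    # candidates: (name, attribute set, profit); constraints: predicates on attribute lists.
    for committee in combinations(candidates, k):
        attrs = [a for _, a, _ in committee]
        profit = sum(p for _, _, p in committee)
        if profit >= threshold and all(c(attrs) for c in constraints):
            return committee
    return None

def needs_vision(attrs):
    # example constraint: at least one selected expert is tagged "vision"
    return any("vision" in a for a in attrs)

experts = [("e1", {"vision"}, 5), ("e2", {"nlp"}, 7), ("e3", {"vision", "nlp"}, 3)]
print(select_committee(experts, [needs_vision], k=2, threshold=8))
# -> (('e1', {'vision'}, 5), ('e2', {'nlp'}, 7))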
Citations: 0