
Latest Publications in Displays

Affective conveyance assessment of AI-generative static visual user interfaces based on valence-arousal emotion model
IF 3.4 | CAS Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Displays, Vol. 91, Article 103261 | Pub Date: 2025-10-20 | DOI: 10.1016/j.displa.2025.103261
Jing Chen, Huimin Tao, Jiahui Wu, Quanjingzi Yuan, Lin Ma, Dengkai Chen, Mingjiu Yu
Generative AI can rapidly create user interfaces (UIs) with distinct emotional tones, yet few studies rigorously test how effectively such UIs convey emotion. Using the Valence–Arousal (VA) framework, we prompted generative AI to produce 40 static visual UIs targeting specific emotions and evaluated them with a mixed-methods protocol in which participants completed Check-All-That-Apply (CATA) descriptors while eye-tracking recorded saccade speed and pupil diameter. Analyses showed that UIs generated from different prompts formed three perceptual categories—positive valence, negative/high arousal, and negative/low arousal—with partial overlap between positive prompts (e.g., “Delighted” and “Relaxed”) and clearer distinctions for negative prompts (“Alarmed”, “Bored”), a pattern mirrored by differences in scanning speed. These findings indicate that AI-generated UIs can embed meaningful affective cues that shape how users feel when viewing on-screen elements, and the combination of subjective and physiological measures offers a practical framework for emotion-focused UI evaluation while motivating further work on refining prompt specificity, incorporating diverse emotion models, and testing broader user demographics.
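To make the category structure concrete, here is a minimal Python sketch (not the authors' code) that places the four prompt emotions named above in the valence-arousal plane and assigns each to one of the three reported perceptual categories; the VA coordinates are illustrative assumptions, not data from the study.

```python
# Minimal sketch (not the authors' code): placing target emotions in the
# valence-arousal (VA) plane and grouping them into the three perceptual
# categories reported in the paper. Coordinates are illustrative values
# in [-1, 1], not measurements from the study.

def va_category(valence: float, arousal: float) -> str:
    """Map a VA coordinate to one of the three reported categories."""
    if valence >= 0:
        return "positive valence"
    return "negative/high arousal" if arousal >= 0 else "negative/low arousal"

# Hypothetical VA coordinates for the four prompt emotions mentioned.
emotions = {
    "Delighted": (0.8, 0.5),
    "Relaxed": (0.6, -0.5),   # positive prompts share the valence half-plane
    "Alarmed": (-0.6, 0.8),
    "Bored": (-0.5, -0.7),
}

for name, (v, a) in emotions.items():
    print(f"{name:>9}: {va_category(v, a)}")
```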
Citations: 0
AIBench: Towards trustworthy evaluation under the 45° law
IF 3.4 | CAS Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Displays, Vol. 91, Article 103255 | Pub Date: 2025-10-20 | DOI: 10.1016/j.displa.2025.103255
Zicheng Zhang, Junying Wang, Yijin Guo, Farong Wen, Zijian Chen, Hanqing Wang, Wenzhe Li, Lu Sun, Yingjie Zhou, Jianbo Zhang, Bowen Yan, Ziheng Jia, Jiahao Xiao, Yuan Tian, Xiangyang Zhu, Kaiwei Zhang, Chunyi Li, Xiaohong Liu, Xiongkuo Min, Qi Jia, Guangtao Zhai
This paper presents AIBench, a flexible and rapidly updating benchmark that aggregates evaluation results from commercial platforms, popular open-source leaderboards, and internal evaluation benchmarks. While existing leaderboards primarily emphasize model capabilities, they often overlook safety evaluations and lack integrated cost-performance information, factors critical for informed decision-making by enterprises and end users. To address this gap, AIBench provides a comprehensive evaluation of foundation models across four key dimensions: Safety, Intelligence, Speed, and Price. Inspired by the 45° Law of Intelligence-Safety Balance, we visualize the trade-off patterns among leading models, offering a bird’s-eye view of how top-tier companies position their models along these two axes. In addition, to support the development of Specialized Generalist Intelligence (SGI), AIBench incorporates a general–special evaluation framework, designed to assess whether models excelling in specialized domains can also maintain strong general-purpose performance. AIBench also tracks performance evolution over time, revealing longitudinal trends in model development. Furthermore, we periodically curate and incorporate insights from the evaluation community to ensure that the benchmark remains timely and relevant. AIBench is intended to serve as a transparent, dynamic, and actionable benchmark for trustworthy evaluation, aiding both researchers and practitioners in navigating the rapidly evolving landscape of foundation models. AIBench is publicly available and maintained at: https://aiben.ch.
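As a loose illustration of the 45° Law reading of such a leaderboard, the sketch below scores models by their signed distance from the line where safety equals intelligence; the model names and scores are invented, and this is not AIBench's API.

```python
# Illustrative sketch, not AIBench's actual code: scoring models against the
# 45-degree intelligence-safety line. A model on the line balances the two
# axes; the signed perpendicular distance shows which side it falls on.
# Scores are made-up placeholders.
import math

models = {
    "model_a": {"intelligence": 0.82, "safety": 0.71},
    "model_b": {"intelligence": 0.64, "safety": 0.69},
}

for name, s in models.items():
    # Signed perpendicular distance from the line safety = intelligence.
    d = (s["safety"] - s["intelligence"]) / math.sqrt(2)
    side = "safety-leaning" if d > 0 else "capability-leaning"
    print(f"{name}: distance from 45-degree line = {d:+.3f} ({side})")
```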
Citations: 0
Analyzing SHAP values of XGBoost algorithms to understand driving features affecting take-over time from vehicle alert to driver action
IF 3.4 | CAS Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Displays, Vol. 91, Article 103263 | Pub Date: 2025-10-18 | DOI: 10.1016/j.displa.2025.103263
Marios Sekadakis, Thodoris Garefalakis, Peter Moertl, George Yannis
This study investigates the factors influencing Take-Over Time (TOT) during transitions from automated to manual driving, emphasizing the novelty of applying XGBoost modeling combined with SHAP analysis to uncover non-linear and implicit dependencies between features. Using high-frequency data from a driving simulator, key variables such as automation level, driving measurements, different types of obstacles, and Human-Machine Interface (HMI) conditions were analyzed to understand their effects on TOT. The XGBoost model was optimized using a cross-validation approach, achieving strong predictive performance (R² = 0.871 on the test set). Feature importance analysis revealed that Automated Driving (AD) level 2 or 3 was the most influential factor, underscoring how extended time budgets and reduced driver engagement interact in shaping TOT. Higher automation levels resulted in longer TOT, with SHAP values consistently positive for AD Level 3, demonstrating the added value of explainable machine learning in clarifying these patterns. Dynamic driving parameters, such as deceleration and speed variability, were also significant. Strong negative deceleration values were generally associated with shorter TOT, reflecting quicker responses under urgent braking. Speed showed a moderate positive effect on TOT at 80–110 km/h, with drivers taking additional time to assess the environment, but higher speeds (above 110 km/h) resulted in quicker responses. Beyond these established effects, SHAP analysis revealed how automation level, obstacle environment, and HMI design jointly condition driver responses. The HADRIAN HMI, while slightly increasing TOT relative to the baseline, appears to offer safety benefits through tailored guidance and improved situational awareness. By combining methodological innovation with contextual insights, this study contributes to a deeper understanding of takeover behavior and provides actionable evidence for optimizing adaptive HMI design and takeover strategies in AD systems.
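The modeling pipeline described above can be sketched with the public XGBoost and SHAP Python APIs; the snippet below trains a regressor on synthetic stand-in features (hypothetical names and toy data, not the study's measurements) and extracts mean absolute SHAP attributions per feature.

```python
# Sketch of the XGBoost + SHAP pipeline on synthetic data. Feature names are
# hypothetical stand-ins for the study's variables; this is not the authors'
# code or dataset.
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(2, 4, n),          # automation level (2 or 3)
    rng.uniform(-8, 0, n),          # deceleration (m/s^2)
    rng.uniform(60, 130, n),        # speed (km/h)
])
feature_names = ["ad_level", "deceleration", "speed"]
# Synthetic TOT: longer under AD level 3, shorter under hard braking.
y = 2.0 + 1.5 * (X[:, 0] == 3) + 0.1 * X[:, 1] + rng.normal(0, 0.3, n)

model = xgb.XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # per-sample, per-feature attributions
print(dict(zip(feature_names, np.abs(shap_values).mean(axis=0))))
```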
Citations: 0
Lightweight deep learning with multi-scale feature fusion for high-precision and low-latency eye tracking
IF 3.4 | CAS Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Displays, Vol. 91, Article 103260 | Pub Date: 2025-10-18 | DOI: 10.1016/j.displa.2025.103260
Liwan Lin, Zongyu Wu, Yijun Lu, Zhong Chen, Weijie Guo
As a cornerstone technology for metaverse interactions, advanced eye-tracking systems require solutions that address dynamic adaptability and computational efficiency challenges. To this end, this work proposes a novel calibration-free eye-tracking system employing a lightweight deep learning architecture with multi-module fusion, improving gaze estimation accuracy, robustness, and real-time performance. Multi-scale feature extraction and an attention mechanism are used to effectively capture gaze-related features. Gaze-point prediction is optimized through a multi-layer feature fusion that integrates spatial and channel information. A mean-squared-error loss and an attention-weight regularization loss are combined to balance prediction accuracy and feature stability during training. The system achieves a best gaze estimation accuracy of 1.76° and a real-time inference latency of 9.71 ms, outperforming traditional methods in both accuracy and efficiency.
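A minimal sketch of the training objective described above, combining a mean-squared-error term on the predicted gaze point with an L2 regularizer on attention weights; the weighting coefficient and tensor shapes are assumptions rather than the paper's published configuration.

```python
# Hedged sketch of the combined training loss: MSE on the predicted gaze
# point plus an L2 penalty on attention weights. The coefficient reg_lambda
# and all shapes are illustrative assumptions.
import torch

def gaze_loss(pred_xy: torch.Tensor,
              true_xy: torch.Tensor,
              attn_weights: torch.Tensor,
              reg_lambda: float = 1e-3) -> torch.Tensor:
    mse = torch.nn.functional.mse_loss(pred_xy, true_xy)
    attn_reg = attn_weights.pow(2).mean()  # discourages unstable attention maps
    return mse + reg_lambda * attn_reg

# Toy usage with random tensors (batch of 8 gaze points, 16x16 attention maps).
pred = torch.rand(8, 2)
true = torch.rand(8, 2)
attn = torch.rand(8, 16, 16)
print(gaze_loss(pred, true, attn).item())
```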
Citations: 0
Efficient image retargeting with Bezier curves
IF 3.4 | CAS Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Displays, Vol. 91, Article 103258 | Pub Date: 2025-10-16 | DOI: 10.1016/j.displa.2025.103258
Guojin Pei, Chen Xu, Huihui Wei, Genke Yang, Jian Chu
Image retargeting based on Seam Carving (SC) achieves content-aware size adjustment by iteratively removing or inserting the minimal-energy seam. However, its computational efficiency and quality are constrained by the discrete, pixel-wise representation of seam paths and the Dynamic Programming (DP) process. To address these limitations, we propose a novel retargeting method based on Bezier curves and Mamba. First, we introduce cubic Bezier curves to represent seams. Each seam can be precisely described by only six scalar parameters derived from four control points, significantly reducing representation complexity (e.g., from O(H) to O(1) per vertical seam, where H is the image height). Second, we improve the energy map by introducing a geometric distance weight ω and a new fusion method termed Fluid Diffusion. This effectively integrates gradient information, edge structures, visual saliency, and depth cues, providing more robust importance guidance for seam selection. Finally, replacing the DP process of the original SC, we construct the Mamba-based regression network BSCNet, which directly regresses the control points of the curved seam (reducing the computational complexity per seam from quadratic to constant). Experimental results demonstrate that BSCNet outperforms reference methods in both image quality and computational efficiency. Specifically, compared to SC, BSCNet improves the TOPIQ score by 9.82% and reduces the BRISQUE score by 13.79%. In addition, BSCNet achieves an inference speed 22.62 times faster than SC in our evaluation. In conclusion, the proposed method overcomes the limitations of traditional SC by combining parametric seam representation with efficient regression-based seam searching, offering a promising solution for efficient, high-quality image retargeting.
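The six-parameter representation can be sketched as follows: for a vertical seam the two endpoint y-coordinates are pinned to the top and bottom rows, leaving six free scalars across the four control points (x0, x1, y1, x2, y2, x3). The snippet below (an illustration, not BSCNet code) evaluates such a cubic Bezier and resamples one x-coordinate per image row.

```python
# Minimal sketch of a cubic Bezier vertical seam (not the authors' code).
# Endpoint y-values are pinned to rows 0 and height-1, so six scalars remain:
# x0, (x1, y1), (x2, y2), x3. The intermediate y-controls should keep the
# curve's y monotone in t so each row is crossed exactly once.
import numpy as np

def bezier_seam(x0, x1, y1, x2, y2, x3, height):
    """Return one x-coordinate per image row along the cubic Bezier seam."""
    t = np.linspace(0.0, 1.0, height)
    p = np.array([[x0, 0.0], [x1, y1], [x2, y2], [x3, height - 1.0]])
    curve = ((1 - t) ** 3)[:, None] * p[0] \
          + (3 * (1 - t) ** 2 * t)[:, None] * p[1] \
          + (3 * (1 - t) * t ** 2)[:, None] * p[2] \
          + (t ** 3)[:, None] * p[3]
    # Resample x at integer row indices so each row maps to one pixel column.
    xs = np.interp(np.arange(height), curve[:, 1], curve[:, 0])
    return np.clip(np.round(xs).astype(int), 0, None)

print(bezier_seam(10, 14, 30.0, 6, 70.0, 12, height=100)[:5])
```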
Citations: 0
Enhancing drawing skills with augmented reality: A study on gesture and eye movement-based training
IF 3.4 | CAS Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Displays, Vol. 91, Article 103256 | Pub Date: 2025-10-16 | DOI: 10.1016/j.displa.2025.103256
Junqin Chen, Ruoqing Xie, Lirong Chen, Runhui Feng, Jin Xie, Linfeng Yang, Meipeng Huang
Industrial drawing skills are typically transmitted from experts to novices through demonstration and practice, yet the embodied (sensorimotor) and cognitive components of such skills remain difficult to quantify. This study investigates whether augmented reality (AR)-assisted human–computer interaction training can digitize and facilitate the transfer of drawing skills from experts to novices. We created a virtual drawing environment to present expert postural trajectories to trainees via AR devices. Gesture postures and eye movements were recorded before and after training and analyzed using dynamic metrics, including gesture joint velocities and eye angular velocities. Skill acquisition and transfer were evaluated based on objective performance scores from expert raters and subjective participant feedback. The results indicate that AR-guided training induced rapid changes in joint velocity profiles and eye angular velocity, accompanied by reduced and more focused spectral energy expenditure. Improvements in objective scores were supported by positive subjective evaluations, and subjective assessments correlated with the observed kinematic changes. These findings demonstrate that AR-mediated human–computer interaction can effectively facilitate the transfer of both sensorimotor and cognitive aspects of drawing skill to novices. This approach shows promise for enhancing the efficiency of industrial design education and other skill-training applications.
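The kinematic metrics mentioned above (joint velocities and the spectral energy of movement) can be illustrated with a short sketch; the sampling rate and toy trajectory are assumptions, not the study's recordings.

```python
# Hedged sketch of the kind of kinematic metrics described above: joint
# velocity from sampled positions, and how concentrated the velocity's
# spectral energy is. Sampling rate and signal are illustrative only.
import numpy as np

fs = 120.0                                  # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
joint_pos = np.sin(2 * np.pi * 1.5 * t)     # toy 1-D joint trajectory

velocity = np.gradient(joint_pos, 1 / fs)   # finite-difference velocity
spectrum = np.abs(np.fft.rfft(velocity)) ** 2
freqs = np.fft.rfftfreq(velocity.size, 1 / fs)

# "More focused" spectral energy ~ a larger share in a narrow band.
band = (freqs > 1.0) & (freqs < 2.0)
print("share of energy in 1-2 Hz band:", spectrum[band].sum() / spectrum.sum())
```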
Citations: 0
Spatio-temporal attention feature fusion: A video quality assessment method for User-Generated Content
IF 3.4 | CAS Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Displays, Vol. 91, Article 103259 | Pub Date: 2025-10-14 | DOI: 10.1016/j.displa.2025.103259
Da Ai, Ting He, Mingyue Lu, Dianwei Wang, Ying Liu
With the rapid proliferation of mobile Internet and social media, individual users have become significant contributors to video content creation. The quality of User-Generated Content (UGC) videos plays a crucial role in determining their dissemination effectiveness. Consequently, UGC quality assessment has emerged as one of the critical research topics in the field of image processing. To address the limitations of existing evaluation methods—such as inadequate detection of dynamic distortions, suboptimal spatio-temporal modeling, and degraded performance in evaluating long-sequence videos—we propose STAFF-Net, a UGC video quality assessment model based on spatio-temporal feature fusion. This model comprises three key modules. A multi-level static feature key domain weighting module is designed to efficiently capture key information consistent with human visual characteristics. An optical flow motion feature extraction module is integrated to capture the motion dynamics within videos. Additionally, a spatio-temporal Transformer encoder module with temporal attention weighting is developed. By leveraging the multi-head attention mechanism, it models global spatial dependencies. The incorporated temporal attention weighting module enhances the temporal correlations between video frames, thereby improving the model’s ability to learn dependencies across different segments of long sequences. Experimental results on the public UGC-VQA datasets demonstrate that the proposed method surpasses most state-of-the-art approaches in terms of SROCC and PLCC metrics. Moreover, Mean Opinion Score (MOS) evaluations exhibit excellent subjective consistency, validating the effectiveness of our proposed method.
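A minimal sketch of temporal attention weighting in the spirit of the module described above (an assumption-laden illustration, not STAFF-Net's released code): per-frame features are scored, softmax-normalized over time, and pooled into a clip-level representation.

```python
# Illustrative sketch of temporal attention weighting over frame features.
# Dimensions and the scoring layer are assumptions, not STAFF-Net's
# published configuration.
import torch
import torch.nn as nn

class TemporalAttentionPool(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # one scalar weight per frame

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, dim)
        w = torch.softmax(self.score(frames), dim=1)   # (batch, time, 1)
        return (w * frames).sum(dim=1)                 # (batch, dim)

feats = torch.randn(2, 32, 256)     # 2 clips, 32 frames, 256-d features
pooled = TemporalAttentionPool(256)(feats)
print(pooled.shape)                 # torch.Size([2, 256])
```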
Citations: 0
The effect of surrounding avatars’ speed and body composition on users’ physical activity and exertion perception in VR GYM
IF 3.4 | CAS Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Displays, Vol. 91, Article 103253 | Pub Date: 2025-10-14 | DOI: 10.1016/j.displa.2025.103253
Bingcheng Ke, Tzu-Yang Wang, Takaya Yuizono
This study investigates the effects of the visual presentation of surrounding avatars on users’ physical activity and perceived exertion in a virtual reality (VR) gym. Previous research has demonstrated that the presence and performance of others can significantly affect people’s exercise behavior; however, the specific effects of surrounding avatars’ exercise speed and body composition on users’ behavior and psychological experiences in the VR gym remain to be further explored. This study focused on two key visual representations of surrounding avatars: (1) exercise speed (fast and slow) and (2) body composition (normal weight and overweight). Participants cycled on a stationary bike in a VR gym while observing surrounding avatars exercising on a treadmill, during which their pedaling frequency, heart rate (HR), electromyography (EMG), perceived exertion, and self-perceived fitness were measured. The results showed that a faster exercise speed of surrounding avatars significantly increased users’ pedaling frequency, and overweight avatars enhanced users’ positive self-perception of fitness compared to normal-weight avatars. Furthermore, an interaction effect was observed: under fast exercise conditions, overweight avatars elicited higher heart rate and EMG values. Notably, the changes in pedaling frequency, EMG, and perceived exertion persisted even after the avatars left the VR gym.
Citations: 0
The influence of data chart encoding form on reading performance under vibration environment
IF 3.4 | CAS Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Displays, Vol. 91, Article 103254 | Pub Date: 2025-10-13 | DOI: 10.1016/j.displa.2025.103254
Tiantian Chen, Moke Li, Zhangfan Shen
Although numerous studies have investigated the impact of data visualization design on usability, few have considered the influence of external environmental factors. This study examined the combined effects of vibration intensity, graph design, and time pressure on users’ accuracy and speed in data recognition tasks. First, vibration data were collected from three typical real-world road conditions to establish the vibration intensity parameters for the experiment. Second, following an extensive survey of typical data visualization types, twelve graphs varying in graphic form and scale precision were adopted in the tasks. Finally, thirty-three participants performed a data recognition task, completing six repetitions for each of the 12 graph materials under varying vibration intensities and time pressures. The results suggested that vibration intensity negatively impacted data reading performance, with higher intensity leading to poorer performance. The main effect of graphic form also reached statistical significance: recognition accuracy was highest for horizontal bar graphs in a vibrating environment, and recognition speed was fastest for semicircle graphs. Scale precision played an important role in improving accuracy but came at the cost of slower recognition, and the advantage of high precision diminished as vibration intensity increased. Furthermore, a strong interaction was observed between graphic form and time pressure: reduced time pressure enhanced the accuracy of circular graphs and narrowed the gap among horizontal bar, circular, and semicircle graphs. The findings provide practical guidelines for the design of data visualization.
Citations: 0
Study of background complexity based on composite multiscale entropy of EEG signals
IF 3.4 | CAS Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Displays, Vol. 91, Article 103250 | Pub Date: 2025-10-10 | DOI: 10.1016/j.displa.2025.103250
Zhou Yu, Li Xue, Weidong Xu, Zhisong Pan, Qi Jia, Yawen Liu, Ling Li, Xin Yang, Bentian Hao
The complexity of backgrounds at target configuration points significantly affects camouflage effectiveness. This study presents a method to characterize background complexity based on the Composite Multiscale Entropy (CMSE) of electroencephalography (EEG) signals. Background images with different vegetation coverage levels (10%, 30%, 50%, 70%, and 90%) and two vegetation distribution patterns (concentrated and dispersed) served as visual stimuli. Thirty-five participants took part in the experiment, and their EEG signals were recorded while viewing the images. CMSE values were computed from the collected EEG data. Statistical analysis indicated that CMSE values at the O1 and Oz channels reached a saturation point when vegetation coverage approached 70%, while the increase at the O2 channel slowed considerably beyond 50% coverage. Dispersed vegetation patterns yielded higher entropy values than concentrated patterns. The findings suggest that, within the scope of the present experimental conditions, vegetation coverage of around 70% at the target configuration point is more conducive to camouflage. Under constrained circumstances, backgrounds with vegetation coverage exceeding 50% are advisable. Furthermore, dispersed vegetation increases visual confusion, thereby enhancing camouflage effectiveness. These results provide useful guidance for selecting suitable target backgrounds, although their applicability to other environmental contexts requires further investigation.
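CMSE itself is a published algorithm: at scale τ, the signal is coarse-grained from every possible starting offset and the sample entropies of the resulting series are averaged. The sketch below follows that common definition with default parameters (m = 2, r = 0.15·std), which are assumptions rather than the settings used in this study.

```python
# Minimal sketch of composite multiscale entropy (CMSE) as generally defined
# in the literature. Parameters m=2 and r=0.15*std are common defaults, not
# necessarily those used by the authors.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn via Chebyshev-distance template matching (ordered pairs)."""
    x = np.asarray(x, float)
    def count(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.abs(emb[:, None] - emb[None, :]).max(axis=2)
        return (d <= r).sum() - len(emb)   # exclude self-matches
    b, a = count(m), count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def cmse(x, scale, m=2):
    """Average SampEn over all coarse-grained series at one scale."""
    x = np.asarray(x, float)
    r = 0.15 * x.std()                     # tolerance fixed from original series
    ents = []
    for k in range(scale):                 # one coarse-graining per offset
        n = (len(x) - k) // scale
        cg = x[k:k + n * scale].reshape(n, scale).mean(axis=1)
        ents.append(sample_entropy(cg, m=m, r=r))
    return float(np.mean(ents))

sig = np.random.default_rng(0).standard_normal(1000)   # stand-in for EEG
print([round(cmse(sig, s), 3) for s in (1, 2, 3)])
```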
Citations: 0