
Frontiers in Computer Science: Latest Publications

Orthogonality and graph divergence losses promote disentanglement in generative models
IF 2.6 Q2 Computer Science Pub Date: 2024-05-22 DOI: 10.3389/fcomp.2024.1274779
Ankita Shukla, Rishi Dadhich, Rajhans Singh, Anirudh Rayas, Pouria Saidi, Gautam Dasarathy, Visar Berisha, Pavan Turaga
Over the last decade, deep generative models have evolved to generate realistic and sharp images. The success of these models is often attributed to an extremely large number of trainable parameters and an abundance of training data, with limited or no understanding of the underlying data manifold. In this article, we explore the possibility of learning a deep generative model that is structured to better capture the underlying manifold's geometry, to effectively improve image generation while providing implicit controlled generation by design. Our approach structures the latent space into multiple disjoint representations capturing different attribute manifolds. The global representations are guided by a disentangling loss for effective attribute representation learning and a differential manifold divergence loss to learn an effective implicit generative model. Experimental results on a 3D shapes dataset demonstrate the model's ability to disentangle attributes without direct supervision and its controllable generative capabilities. These findings underscore the potential of structuring deep generative models to enhance image generation and attribute control without direct supervision from ground-truth attributes, signaling progress toward more sophisticated deep generative models.
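The abstract names a disentangling (orthogonality) loss without giving its exact form. As a rough illustration of the general idea, here is a minimal PyTorch sketch of a cross-covariance penalty between two latent blocks; the function name, shapes, and the choice of a squared-Frobenius-norm penalty are illustrative assumptions, not the paper's actual loss.

```python
import torch

def orthogonality_loss(z_a: torch.Tensor, z_b: torch.Tensor) -> torch.Tensor:
    """Penalize statistical dependence between two latent blocks.

    z_a: (batch, dim_a), z_b: (batch, dim_b). Driving their cross-covariance
    toward zero encourages each block to encode a distinct attribute.
    (Illustrative assumption, not the paper's exact formulation.)
    """
    z_a = z_a - z_a.mean(dim=0, keepdim=True)  # center over the batch
    z_b = z_b - z_b.mean(dim=0, keepdim=True)
    cov = z_a.T @ z_b / (z_a.shape[0] - 1)     # (dim_a, dim_b) cross-covariance
    return (cov ** 2).sum()                    # squared Frobenius norm

# Toy usage: two 4-D latent blocks for a batch of 8 samples.
z_shape = torch.randn(8, 4, requires_grad=True)
z_color = torch.randn(8, 4, requires_grad=True)
loss = orthogonality_loss(z_shape, z_color)
loss.backward()  # gradients flow into both blocks
```

In practice such a term would be added, with a weighting coefficient, to the generative model's main training objective.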
{"title":"Orthogonality and graph divergence losses promote disentanglement in generative models","authors":"Ankita Shukla, Rishi Dadhich, Rajhans Singh, Anirudh Rayas, Pouria Saidi, Gautam Dasarathy, Visar Berisha, Pavan Turaga","doi":"10.3389/fcomp.2024.1274779","DOIUrl":"https://doi.org/10.3389/fcomp.2024.1274779","url":null,"abstract":"Over the last decade, deep generative models have evolved to generate realistic and sharp images. The success of these models is often attributed to an extremely large number of trainable parameters and an abundance of training data, with limited or no understanding of the underlying data manifold. In this article, we explore the possibility of learning a deep generative model that is structured to better capture the underlying manifold's geometry, to effectively improve image generation while providing implicit controlled generation by design. Our approach structures the latent space into multiple disjoint representations capturing different attribute manifolds. The global representations are guided by a disentangling loss for effective attribute representation learning and a differential manifold divergence loss to learn an effective implicit generative model. Experimental results on a 3D shapes dataset demonstrate the model's ability to disentangle attributes without direct supervision and its controllable generative capabilities. These findings underscore the potential of structuring deep generative models to enhance image generation and attribute control without direct supervision with ground truth attributes signaling progress toward more sophisticated deep generative models.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141108829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Linguistic analysis of human-computer interaction
IF 2.6 Q2 Computer Science Pub Date: 2024-05-21 DOI: 10.3389/fcomp.2024.1384252
Georgia Zellou, Nicole Holliday
This article reviews recent literature investigating speech variation in production and comprehension during spoken language communication between humans and devices. Human speech patterns toward voice-AI present a test of our scientific understanding of speech communication and language use. First, work exploring how human-AI interactions are similar to, or different from, human-human interactions in the realm of speech variation is reviewed. In particular, we focus on studies examining how users adapt their speech when resolving linguistic misunderstandings by computers and when accommodating their speech toward devices. Next, we consider work that investigates how top-down factors in the interaction can influence users' linguistic interpretations of speech produced by technological agents, and how the ways in which speech is generated (via text-to-speech synthesis, TTS) and recognized (using automatic speech recognition technology, ASR) affect communication. Throughout this review, we aim to bridge HCI frameworks and theoretical linguistic models accounting for variation in human speech. We also highlight findings in this growing area that can provide insight into the cognitive and social representations underlying linguistic communication more broadly. Additionally, we touch on the implications of this line of work for addressing major societal issues in speech technology.
{"title":"Linguistic analysis of human-computer interaction","authors":"Georgia Zellou, Nicole Holliday","doi":"10.3389/fcomp.2024.1384252","DOIUrl":"https://doi.org/10.3389/fcomp.2024.1384252","url":null,"abstract":"This article reviews recent literature investigating speech variation in production and comprehension during spoken language communication between humans and devices. Human speech patterns toward voice-AI presents a test to our scientific understanding about speech communication and language use. First, work exploring how human-AI interactions are similar to, or different from, human-human interactions in the realm of speech variation is reviewed. In particular, we focus on studies examining how users adapt their speech when resolving linguistic misunderstandings by computers and when accommodating their speech toward devices. Next, we consider work that investigates how top-down factors in the interaction can influence users’ linguistic interpretations of speech produced by technological agents and how the ways in which speech is generated (via text-to-speech synthesis, TTS) and recognized (using automatic speech recognition technology, ASR) has an effect on communication. Throughout this review, we aim to bridge both HCI frameworks and theoretical linguistic models accounting for variation in human speech. We also highlight findings in this growing area that can provide insight to the cognitive and social representations underlying linguistic communication more broadly. Additionally, we touch on the implications of this line of work for addressing major societal issues in speech technology.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141117480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Shape from dots: a window into abstraction processes in visual perception
IF 2.6 Q2 Computer Science Pub Date: 2024-05-16 DOI: 10.3389/fcomp.2024.1367534
Nicholas Baker, P. Kellman
A remarkable phenomenon in perception is that the visual system spontaneously organizes sets of discrete elements into abstract shape representations. We studied perceptual performance with dot displays to discover what spatial relationships support shape perception. In Experiment 1, we tested conditions that lead dot arrays to be perceived as smooth contours vs. having vertices. We found that the perception of a smooth contour vs. a vertex was influenced by spatial relations between dots beyond the three points that define the angle of the point in question. However, there appeared to be a hard boundary around 90°, such that any angle of 90° or less was perceived as a vertex regardless of the spatial relations of ancillary dots. We hypothesized that dot arrays whose triplets were perceived as smooth curves would be more readily perceived as a unitary object because they can be encoded more economically. In Experiment 2, we generated dot arrays with and without such “vertex triplets” and compared participants' phenomenological reports of a unified shape for shapes with smooth curves vs. shapes with angular corners. Observers gave higher shape ratings for dot arrays from curvilinear shapes. In Experiment 3, we tested shape encoding using a mental rotation task. Participants judged whether two dot arrays were the same or different at five angular differences. Subjects responded reliably faster for displays without vertex triplets, suggesting economical encoding of smooth displays. We followed this up in Experiment 4 using a visual search task. Shapes with and without vertex triplets were embedded in arrays with 25 distractor dots. Participants were asked to detect which display in a 2IFC (two-interval forced choice) paradigm contained a shape as opposed to a distractor with random dots. Performance was better when the dots were sampled from a smooth shape than when they were sampled from a shape with vertex triplets. These results suggest that the visual system processes dot arrangements as coherent shapes automatically, using precise smoothness constraints. This ability may be a consequence of processes that extract curvature in defining object shape and is consistent with recent theory and evidence suggesting that 2D contour representations are composed of constant-curvature primitives.
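Experiment 1's 90° boundary suggests a simple geometric rule. The sketch below is illustrative (not the authors' stimulus code): it computes the angle at each dot from its two neighbors and flags any triplet at or below 90° as a vertex triplet.

```python
import numpy as np

def interior_angle(p_prev, p, p_next):
    """Angle in degrees at dot p, formed by its two neighboring dots."""
    v1 = np.asarray(p_prev, dtype=float) - np.asarray(p, dtype=float)
    v2 = np.asarray(p_next, dtype=float) - np.asarray(p, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def has_vertex_triplet(dots, threshold_deg=90.0):
    """True if any consecutive triplet forms an angle at or below the threshold."""
    return any(
        interior_angle(dots[i - 1], dots[i], dots[i + 1]) <= threshold_deg
        for i in range(1, len(dots) - 1)
    )

print(has_vertex_triplet([(0, 0), (1, 0), (1, 1)]))    # True: a 90° corner
print(has_vertex_triplet([(0, 0), (1, 0.1), (2, 0)]))  # False: a ~169° shallow bend
```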
{"title":"Shape from dots: a window into abstraction processes in visual perception","authors":"Nicholas Baker, P. Kellman","doi":"10.3389/fcomp.2024.1367534","DOIUrl":"https://doi.org/10.3389/fcomp.2024.1367534","url":null,"abstract":"A remarkable phenomenon in perception is that the visual system spontaneously organizes sets of discrete elements into abstract shape representations. We studied perceptual performance with dot displays to discover what spatial relationships support shape perception.In Experiment 1, we tested conditions that lead dot arrays to be perceived as smooth contours vs. having vertices. We found that the perception of a smooth contour vs. a vertex was influenced by spatial relations between dots beyond the three points that define the angle of the point in question. However, there appeared to be a hard boundary around 90° such that any angle 90° or less was perceived as a vertex regardless of the spatial relations of ancillary dots. We hypothesized that dot arrays whose triplets were perceived as smooth curves would be more readily perceived as a unitary object because they can be encoded more economically. In Experiment 2, we generated dot arrays with and without such “vertex triplets” and compared participants’ phenomenological reports of a unified shape with smooth curves vs. shapes with angular corners. Observers gave higher shape ratings for dot arrays from curvilinear shapes. In Experiment 3, we tested shape encoding using a mental rotation task. Participants judged whether two dot arrays were the same or different at five angular differences. Subjects responded reliably faster for displays without vertex triplets, suggesting economical encoding of smooth displays. We followed this up in Experiment 4 using a visual search task. Shapes with and without vertex triplets were embedded in arrays with 25 distractor dots. Participants were asked to detect which display in a 2IFC paradigm contained a shape against a distractor with random dots. Performance was better when the dots were sampled from a smooth shape than when they were sampled from a shape with vertex triplets.These results suggest that the visual system processes dot arrangements as coherent shapes automatically using precise smoothness constraints. This ability may be a consequence of processes that extract curvature in defining object shape and is consistent with recent theory and evidence suggesting that 2D contour representations are composed of constant curvature primitives.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141127464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A magnetometer-based method for in-situ syncing of wearable inertial measurement units
IF 2.6 Q2 Computer Science Pub Date: 2024-04-19 DOI: 10.3389/fcomp.2024.1385392
T. Gilbert, Zexiao Lin, Sally Day, Antonia Hamilton, Jamie A. Ward
This paper presents a novel method to synchronize multiple wireless inertial measurement unit (IMU) sensors using their onboard magnetometers. The basic method uses an external electromagnetic pulse to create a known event measured by the magnetometers of multiple IMUs and in turn uses this to synchronize the devices. An initial evaluation using four commercial IMUs reveals a maximum error of 40 ms per hour, as limited by a 25 Hz sample rate. Building on this, we introduce a novel method to improve synchronization beyond the limitations imposed by the sample rate and evaluate it in a further study using 8 IMUs. We show that a sequence of electromagnetic pulses, in total lasting less than 3 s, can reduce the maximum synchronization error to 8 ms (for a 25 Hz sample rate, and accounting for the transient response time of the magnetic field generator). An advantage of this method is that it can be applied to several devices, either simultaneously or individually, without the need to remove them from the context in which they are being used. This makes the approach particularly suited to synchronizing multi-person on-body sensors while they are being worn.
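A minimal sketch of the basic event-based idea, assuming each device records raw magnetometer samples at 25 Hz: locate the sample where the field magnitude spikes on each IMU, then convert index differences into clock offsets. The spike detector and all names are illustrative; the paper's multi-pulse refinement for sub-sample accuracy is not reproduced here.

```python
import numpy as np

def pulse_index(mag_xyz: np.ndarray) -> int:
    """Index of the strongest magnetometer spike in an (n_samples, 3) stream.

    Looks for the largest deviation of field magnitude from the stream's
    median, so a brief external pulse stands out against Earth's field.
    """
    magnitude = np.linalg.norm(mag_xyz, axis=1)
    return int(np.argmax(np.abs(magnitude - np.median(magnitude))))

def sync_offsets(streams, sample_rate_hz=25.0):
    """Per-device time offsets in seconds, relative to the first device."""
    indices = [pulse_index(s) for s in streams]
    return [(i - indices[0]) / sample_rate_hz for i in indices]

# Two simulated 25 Hz streams whose shared pulse lands 3 samples (120 ms) apart.
rng = np.random.default_rng(0)
imu_a = rng.normal(0.0, 0.05, (250, 3)); imu_a[100] += 5.0  # pulse at sample 100
imu_b = rng.normal(0.0, 0.05, (250, 3)); imu_b[103] += 5.0  # pulse at sample 103
print(sync_offsets([imu_a, imu_b]))  # [0.0, 0.12]
```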
{"title":"A magnetometer-based method for in-situ syncing of wearable inertial measurement units","authors":"T. Gilbert, Zexiao Lin, Sally Day, Antonia Hamilton, Jamie A. Ward","doi":"10.3389/fcomp.2024.1385392","DOIUrl":"https://doi.org/10.3389/fcomp.2024.1385392","url":null,"abstract":"This paper presents a novel method to synchronize multiple wireless inertial measurement unit sensors (IMU) using their onboard magnetometers. The basic method uses an external electromagnetic pulse to create a known event measured by the magnetometer of multiple IMUs and in turn uses this to synchronize the devices. An initial evaluation using four commercial IMUs reveals a maximum error of 40 ms per hour as limited by a 25 Hz sample rate. Building on this we introduce a novel method to improve synchronization beyond the limitations imposed by the sample rate and evaluate this in a further study using 8 IMUs. We show that a sequence of electromagnetic pulses, in total lasting <3-s, can reduce the maximum synchronization error to 8 ms (for 25 Hz sample rate, and accounting for the transient response time of the magnetic field generator). An advantage of this method is that it can be applied to several devices, either simultaneously or individually, without the need to remove them from the context in which they are being used. This makes the approach particularly suited to synchronizing multi-person on-body sensors while they are being worn.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140684614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Top-down and bottom-up approaches to video quality of experience studies; overview and proposal of a new model
IF 2.6 Q2 Computer Science Pub Date: 2024-04-15 DOI: 10.3389/fcomp.2024.1305670
Kamil Koniuch, Sabina Baraković, J. Husić, Sruti Subramanian, Katrien De Moor, Lucjan Janowski, Michał Wierzchoń
Modern video streaming services require quality assurance of the presented audiovisual material. Quality assurance mechanisms allow streaming platforms to provide quality levels that are considered sufficient to yield user satisfaction, with the least possible amount of data transferred. A variety of measures and approaches have been developed to control video quality, e.g., by adapting it to network conditions. These include objective metrics of quality and thresholds identified by means of subjective perceptual judgments. The latter group of measures has recently gained the attention of (multi)media researchers. They call this area of study “Quality of Experience” (QoE). In this paper, we present a theoretical model based on a review of previous QoE models. We argue that most of them represent the bottom-up approach to modeling. Such models focus on describing as many variables as possible, but with a limited ability to investigate the causal relationships between them; therefore, the applicability of the findings in practice is limited. To advance the field, we therefore propose a structural, top-down model of video QoE that describes causal relationships among variables. This novel top-down model serves as a practical guide for structuring QoE experiments, ensuring the incorporation of influential factors in a confirmatory manner.
{"title":"Top-down and bottom-up approaches to video quality of experience studies; overview and proposal of a new model","authors":"Kamil Koniuch, Sabina Baraković, J. Husić, Sruti Subramanian, Katrien De Moor, Lucjan Janowski, Michał Wierzchoń","doi":"10.3389/fcomp.2024.1305670","DOIUrl":"https://doi.org/10.3389/fcomp.2024.1305670","url":null,"abstract":"Modern video streaming services require quality assurance of the presented audiovisual material. Quality assurance mechanisms allow streaming platforms to provide quality levels that are considered sufficient to yield user satisfaction, with the least possible amount of data transferred. A variety of measures and approaches have been developed to control video quality, e.g., by adapting it to network conditions. These include objective matrices of the quality and thresholds identified by means of subjective perceptual judgments. The former group of matrices has recently gained the attention of (multi) media researchers. They call this area of study “Quality of Experience” (QoE). In this paper, we present a theoretical model based on review of previous QoE’s models. We argue that most of them represent the bottom-up approach to modeling. Such models focus on describing as many variables as possible, but with a limited ability to investigate the causal relationship between them; therefore, the applicability of the findings in practice is limited. To advance the field, we therefore propose a structural, top-down model of video QoE that describes causal relationships among variables. This novel top-down model serves as a practical guide for structuring QoE experiments, ensuring the incorporation of influential factors in a confirmatory manner.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140702611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Inclusive gaming through AI: a perspective for identifying opportunities and obstacles through co-design with people living with MND
IF 2.6 Q2 Computer Science Pub Date: 2024-04-10 DOI: 10.3389/fcomp.2024.1379559
Natasha Dwyer, Matthew Harrison, Ben O’Mara, Kirsten Harley
This interdisciplinary research initiative seeks to enhance the accessibility of video gaming for individuals living with Motor Neurone Disease (MND), a condition characterized by progressive muscle weakness. Gaming serves as a social and recreational outlet for many, connecting friends, family, and even strangers through collaboration and competition. However, MND's disease progression, including muscle weakness and paralysis, severely limits the ability to engage in gaming. In this paper, we describe our exploration of AI solutions to improve the accessibility of gaming. We argue that any application of accessible AI must be led by lived experience. Notably, we found in our previous scoping review that existing academic research into video games for those living with MND largely neglects the experiences of MND patients in the context of video games and AI, which prompted us to address this critical gap.
{"title":"Inclusive gaming through AI: a perspective for identifying opportunities and obstacles through co-design with people living with MND","authors":"Natasha Dwyer, Matthew Harrison, Ben O’Mara, Kirsten Harley","doi":"10.3389/fcomp.2024.1379559","DOIUrl":"https://doi.org/10.3389/fcomp.2024.1379559","url":null,"abstract":"This interdisciplinary research initiative seeks to enhance the accessibility of video gaming for individuals living with Motor Neurone Disease (MND), a condition characterized by progressive muscle weakness. Gaming serves as a social and recreational outlet for many, connecting friends, family, and even strangers through collaboration and competition. However, MND’s disease progression, including muscle weakness and paralysis, severely limit the ability to engage in gaming. In this paper, we desscribe our exploration of AI solutions to improve accessibility to gaming. We argue that any application of accessible AI must be led by lived experience. Notably, we found in our previous scoping review, existing academic research into video games for those living with MND largely neglects the experiences of MND patients in the context of video games and AI, which was a prompt for us to address this critical gap.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140718517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Evaluating the robustness of multimodal task load estimation models
IF 2.6 Q2 Computer Science Pub Date: 2024-04-10 DOI: 10.3389/fcomp.2024.1371181
Andreas Foltyn, J. Deuschel, Nadine R. Lang-Richter, Nina Holzer, Maximilian P. Oppelt
Numerous studies have focused on constructing multimodal machine learning models for estimating a person's cognitive load. However, a prevalent limitation is that these models are typically evaluated on data from the same scenario they were trained on. Little attention has been given to their robustness against data distribution shifts, which may occur during deployment. The aim of this paper is to investigate the performance of these models when confronted with a scenario different from the one on which they were trained. For this evaluation, we utilized a dataset encompassing two distinct scenarios: an n-Back test and a driving simulation. We selected a variety of classic machine learning and deep learning architectures, which were further complemented by various fusion techniques. The models were trained on the data from the n-Back task and tested on both scenarios to evaluate their predictive performance. However, the predictive performance alone may not lead to a trustworthy model. Therefore, we looked at the uncertainty estimates of these models. By leveraging these estimates, we can reduce misclassification by resorting to alternative measures in situations of high uncertainty. The findings indicate that late fusion produces stable classification results across the examined models for both scenarios, enhancing robustness compared to feature-based fusion methods. Although a simple logistic regression tends to provide the best predictive performance for n-Back, this is not always the case if the data distribution is shifted. Finally, the predictive performance of individual modalities differs significantly between the two scenarios. This research provides insights into the capabilities and limitations of multimodal machine learning models in handling distribution shifts and identifies which approaches may potentially be suitable for achieving robust results.
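A minimal sketch of the two ingredients the abstract highlights: late fusion of per-modality class probabilities, and abstaining when predictive entropy is high. The entropy threshold and array shapes are illustrative assumptions, not values from the paper.

```python
import numpy as np

def late_fusion(prob_list):
    """Average class-probability vectors predicted independently per modality."""
    return np.mean(prob_list, axis=0)

def predict_or_abstain(prob_list, entropy_threshold=0.6):
    """Return the fused class label, or None when uncertainty is too high."""
    fused = late_fusion(prob_list)
    entropy = -np.sum(fused * np.log(fused + 1e-12))  # predictive entropy (nats)
    return int(np.argmax(fused)) if entropy < entropy_threshold else None

# Two modalities agree -> confident label; they conflict -> abstain.
print(predict_or_abstain([np.array([0.9, 0.1]), np.array([0.8, 0.2])]))  # 0
print(predict_or_abstain([np.array([0.9, 0.1]), np.array([0.1, 0.9])]))  # None
```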
{"title":"Evaluating the robustness of multimodal task load estimation models","authors":"Andreas Foltyn, J. Deuschel, Nadine R. Lang-Richter, Nina Holzer, Maximilian P. Oppelt","doi":"10.3389/fcomp.2024.1371181","DOIUrl":"https://doi.org/10.3389/fcomp.2024.1371181","url":null,"abstract":"Numerous studies have focused on constructing multimodal machine learning models for estimating a person's cognitive load. However, a prevalent limitation is that these models are typically evaluated on data from the same scenario they were trained on. Little attention has been given to their robustness against data distribution shifts, which may occur during deployment. The aim of this paper is to investigate the performance of these models when confronted with a scenario different from the one on which they were trained. For this evaluation, we utilized a dataset encompassing two distinct scenarios: an n-Back test and a driving simulation. We selected a variety of classic machine learning and deep learning architectures, which were further complemented by various fusion techniques. The models were trained on the data from the n-Back task and tested on both scenarios to evaluate their predictive performance. However, the predictive performance alone may not lead to a trustworthy model. Therefore, we looked at the uncertainty estimates of these models. By leveraging these estimates, we can reduce misclassification by resorting to alternative measures in situations of high uncertainty. The findings indicate that late fusion produces stable classification results across the examined models for both scenarios, enhancing robustness compared to feature-based fusion methods. Although a simple logistic regression tends to provide the best predictive performance for n-Back, this is not always the case if the data distribution is shifted. Finally, the predictive performance of individual modalities differs significantly between the two scenarios. This research provides insights into the capabilities and limitations of multimodal machine learning models in handling distribution shifts and identifies which approaches may potentially be suitable for achieving robust results.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140718341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
EmoAsst: emotion recognition assistant via text-guided transfer learning on pre-trained visual and acoustic models
IF 2.6 Q2 Computer Science Pub Date: 2024-04-09 DOI: 10.3389/fcomp.2024.1304687
Minxiao Wang, Ning Yang
Children diagnosed with Autism Spectrum Disorder (ASD) often struggle to grasp social conventions and promptly recognize others' emotions. Recent advancements in the application of deep learning (DL) to emotion recognition are solidifying the role of AI-powered assistive technology in supporting autistic children. However, the cost of collecting and annotating large-scale high-quality human emotion data and the phenomenon of unbalanced performance across different modalities of data challenge DL-based emotion recognition. In response to these challenges, this paper explores transfer learning, wherein large pre-trained models like Contrastive Language-Image Pre-training (CLIP) and wav2vec 2.0 are fine-tuned to improve audio- and video-based emotion recognition with text-based guidance. In this work, we propose the EmoAsst framework, which includes a visual fusion module and emotion prompt fine-tuning for CLIP, in addition to leveraging CLIP's text encoder and supervised contrastive learning for audio-based emotion recognition on the wav2vec 2.0 model. In addition, a joint few-shot emotion classifier enhances accuracy and offers great adaptability for real-world applications. The evaluation results on the MELD dataset highlight the outstanding performance of our methods, surpassing the majority of existing video- and audio-based approaches. Notably, our research demonstrates the promising potential of the proposed text-based guidance techniques for improving video- and audio-based Emotion Recognition and Classification (ERC).
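The core mechanism, scoring an audio embedding against text-encoded emotion prompts, can be sketched generically. The emotion list, the 512-dimensional vectors, and the idea of a learned projection from wav2vec 2.0 features into CLIP's text-embedding space are hypothetical stand-ins here, not the paper's actual modules.

```python
import torch
import torch.nn.functional as F

EMOTIONS = ["anger", "joy", "sadness", "surprise", "neutral"]  # illustrative labels

def classify_by_prompt(audio_emb: torch.Tensor, prompt_embs: torch.Tensor) -> int:
    """Pick the emotion whose text-prompt embedding best matches the audio.

    audio_emb: (d,) pooled audio features projected into the text space.
    prompt_embs: (n_emotions, d) text embeddings of prompts such as
    "a voice expressing joy". Both are assumptions, not the paper's modules.
    """
    audio_emb = F.normalize(audio_emb, dim=-1)
    prompt_embs = F.normalize(prompt_embs, dim=-1)
    similarity = prompt_embs @ audio_emb  # cosine similarity per emotion
    return int(similarity.argmax())

# Toy usage with random 512-D embeddings standing in for real encoder outputs.
torch.manual_seed(0)
audio = torch.randn(512)
prompts = torch.randn(len(EMOTIONS), 512)
print(EMOTIONS[classify_by_prompt(audio, prompts)])
```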
{"title":"EmoAsst: emotion recognition assistant via text-guided transfer learning on pre-trained visual and acoustic models","authors":"Minxiao Wang, Ning Yang","doi":"10.3389/fcomp.2024.1304687","DOIUrl":"https://doi.org/10.3389/fcomp.2024.1304687","url":null,"abstract":"Children diagnosed with Autism Spectrum Disorder (ASD) often struggle to grasp social conventions and promptly recognize others' emotions. Recent advancements in the application of deep learning (DL) to emotion recognition are solidifying the role of AI-powered assistive technology in supporting autistic children. However, the cost of collecting and annotating large-scale high-quality human emotion data and the phenomenon of unbalanced performance on different modalities of data challenge DL-based emotion recognition. In response to these challenges, this paper explores transfer learning, wherein large pre-trained models like Contrastive Language-Image Pre-training (CLIP) and wav2vec 2.0 are fine-tuned to improve audio- and video-based emotion recognition with text- based guidance. In this work, we propose the EmoAsst framework, which includes a visual fusion module and emotion prompt fine-tuning for CLIP, in addition to leveraging CLIP's text encoder and supervised contrastive learning for audio-based emotion recognition on the wav2vec 2.0 model. In addition, a joint few-shot emotion classifier enhances the accuracy and offers great adaptability for real-world applications. The evaluation results on the MELD dataset highlight the outstanding performance of our methods, surpassing the majority of existing video and audio-based approaches. Notably, our research demonstrates the promising potential of the proposed text-based guidance techniques for improving video and audio-based Emotion Recognition and Classification (ERC).","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140727072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Psychological profiling of hackers via machine learning toward sustainable cybersecurity
IF 2.6 Q2 Computer Science Pub Date: 2024-04-08 DOI: 10.3389/fcomp.2024.1381351
Umema Hani, Osama Sohaib, Khalid Khan, Asma Aleidi, Noman Islam
This research addresses the challenge of hacker classification using the “big five personality traits” model (OCEAN) and explores associations between personality traits and hacker types. The method's prediction performance was evaluated in two groups: students with hacking experience who intend to pursue information security and ethical hacking, and industry professionals who work as White Hat hackers. These professionals were further categorized based on their behavioral tendencies, incorporating Gray Hat traits. The k-means algorithm analyzed intra-cluster dependencies, elucidating variations within different clusters and their correlation with Hat types. The study achieved an 88% accuracy in mapping clusters to Hat types, effectively identifying cyber-criminal behaviors. Ethical considerations regarding privacy and bias in personality profiling methodologies within cybersecurity are discussed, emphasizing the importance of informed consent, transparency, and accountability in data management practices. Furthermore, the research underscores the need for sustainable cybersecurity practices, integrating environmental and societal impacts into security frameworks. This study aims to advance responsible cybersecurity practices by promoting awareness and ethical considerations and prioritizing privacy, equity, and sustainability principles.
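A minimal sketch of the clustering step, assuming each participant is represented as a five-dimensional OCEAN score vector; the synthetic profiles and cluster count are illustrative, not the study's data.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic OCEAN vectors (openness, conscientiousness, extraversion,
# agreeableness, neuroticism), one row per participant, scores in [0, 1].
rng = np.random.default_rng(42)
profile_a = rng.normal([0.6, 0.8, 0.5, 0.7, 0.3], 0.05, (50, 5))  # hypothetical White Hat-like
profile_b = rng.normal([0.8, 0.5, 0.6, 0.4, 0.5], 0.05, (50, 5))  # hypothetical Gray Hat-like
scores = np.vstack([profile_a, profile_b])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(scores)

# Inspect how cleanly the two simulated profiles separate; mapping clusters
# to Hat types would then be validated against labeled participants.
print("cluster sizes:", np.bincount(labels))
print("first labels per profile:", labels[:5], labels[50:55])
```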
{"title":"Psychological profiling of hackers via machine learning toward sustainable cybersecurity","authors":"Umema Hani, Osama Sohaib, Khalid Khan, Asma Aleidi, Noman Islam","doi":"10.3389/fcomp.2024.1381351","DOIUrl":"https://doi.org/10.3389/fcomp.2024.1381351","url":null,"abstract":"This research addresses a challenge of the hacker classification framework based on the “big five personality traits” model (OCEAN) and explores associations between personality traits and hacker types. The method's application prediction performance was evaluated in two groups: Students with hacking experience who intend to pursue information security and ethical hacking and industry professionals who work as White Hat hackers. These professionals were further categorized based on their behavioral tendencies, incorporating Gray Hat traits. The k-means algorithm analyzed intra-cluster dependencies, elucidating variations within different clusters and their correlation with Hat types. The study achieved an 88% accuracy in mapping clusters with Hat types, effectively identifying cyber-criminal behaviors. Ethical considerations regarding privacy and bias in personality profiling methodologies within cybersecurity are discussed, emphasizing the importance of informed consent, transparency, and accountability in data management practices. Furthermore, the research underscores the need for sustainable cybersecurity practices, integrating environmental and societal impacts into security frameworks. This study aims to advance responsible cybersecurity practices by promoting awareness and ethical considerations and prioritizing privacy, equity, and sustainability principles.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140731330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A comprehensive evaluation of marker-based, markerless methods for loose garment scenarios in varying camera configurations
IF 2.6 Q2 Computer Science Pub Date: 2024-04-05 DOI: 10.3389/fcomp.2024.1379925
Lala Shakti Swarup Ray, Bo Zhou, Sungho Suh, P. Lukowicz
In support of smart wearable researchers striving to select optimal ground truth methods for motion capture across a spectrum of loose garment types, we present an extended benchmark named DrapeMoCapBench (DMCB+). This augmented benchmark incorporates a more intricate limb-wise Motion Capture (MoCap) accuracy analysis and enhanced drape calculation, and introduces a novel benchmarking tool that encompasses multicamera deep learning MoCap methods. DMCB+ is specifically designed to evaluate the performance of both optical marker-based and markerless MoCap techniques, taking into account the challenges posed by various loose garment types. While high-cost marker-based systems are acknowledged for their precision, they often require skin-tight markers on bony areas, which can be impractical with loose garments. On the other hand, markerless MoCap methods driven by computer vision models have evolved to be more cost-effective, utilizing smartphone cameras and exhibiting promising results. Utilizing real-world MoCap datasets, DMCB+ conducts 3D physics simulations with a comprehensive set of variables, including six drape levels, three motion intensities, and six body-gender combinations. The extended benchmark provides a nuanced analysis of advanced marker-based and markerless MoCap techniques, highlighting their strengths and weaknesses across distinct scenarios. In particular, DMCB+ reveals that when evaluating casual loose garments, both marker-based and markerless methods exhibit notable performance degradation (>10 cm). However, in scenarios involving everyday activities with basic and swift motions, markerless MoCap outperforms marker-based alternatives. This positions markerless MoCap as an advantageous and economical choice for wearable studies. The inclusion of a multicamera deep learning MoCap method in the benchmarking tool further expands the scope, allowing researchers to assess the capabilities of cutting-edge technologies in diverse motion capture scenarios.
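The benchmark's limb-wise accuracy analysis can be approximated with a mean per-joint position error (MPJPE) computed per limb group; the joint indices and groupings below are illustrative assumptions, not DMCB+'s actual schema.

```python
import numpy as np

# Hypothetical grouping of joint indices into limbs (not DMCB+'s schema).
JOINT_GROUPS = {"left_arm": [0, 1, 2], "right_arm": [3, 4, 5], "legs": [6, 7, 8, 9]}

def limbwise_mpjpe(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Mean per-joint position error per limb group, in the input's units.

    pred, truth: (n_frames, n_joints, 3) arrays of 3D joint positions.
    """
    per_joint = np.linalg.norm(pred - truth, axis=-1)  # (n_frames, n_joints)
    return {limb: float(per_joint[:, idx].mean()) for limb, idx in JOINT_GROUPS.items()}

# Toy check: 100 frames, 10 joints, estimates jittered ~2 cm around ground truth.
rng = np.random.default_rng(1)
truth = rng.uniform(-1.0, 1.0, (100, 10, 3))
pred = truth + rng.normal(0.0, 0.02, truth.shape)
print(limbwise_mpjpe(pred, truth))  # each limb's error around 0.03 m
```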
{"title":"A comprehensive evaluation of marker-based, markerless methods for loose garment scenarios in varying camera configurations","authors":"Lala Shakti Swarup Ray, Bo Zhou, Sungho Suh, P. Lukowicz","doi":"10.3389/fcomp.2024.1379925","DOIUrl":"https://doi.org/10.3389/fcomp.2024.1379925","url":null,"abstract":"In support of smart wearable researchers striving to select optimal ground truth methods for motion capture across a spectrum of loose garment types, we present an extended benchmark named DrapeMoCapBench (DMCB+). This augmented benchmark incorporates a more intricate limb-wise Motion Capture (MoCap) accuracy analysis, and enhanced drape calculation, and introduces a novel benchmarking tool that encompasses multicamera deep learning MoCap methods. DMCB+ is specifically designed to evaluate the performance of both optical marker-based and markerless MoCap techniques, taking into account the challenges posed by various loose garment types. While high-cost marker-based systems are acknowledged for their precision, they often require skin-tight markers on bony areas, which can be impractical with loose garments. On the other hand, markerless MoCap methods driven by computer vision models have evolved to be more cost-effective, utilizing smartphone cameras and exhibiting promising results. Utilizing real-world MoCap datasets, DMCB+ conducts 3D physics simulations with a comprehensive set of variables, including six drape levels, three motion intensities, and six body-gender combinations. The extended benchmark provides a nuanced analysis of advanced marker-based and markerless MoCap techniques, highlighting their strengths and weaknesses across distinct scenarios. In particular, DMCB+ reveals that when evaluating casual loose garments, both marker-based and markerless methods exhibit notable performance degradation (>10 cm). However, in scenarios involving everyday activities with basic and swift motions, markerless MoCap outperforms marker-based alternatives. This positions markerless MoCap as an advantageous and economical choice for wearable studies. The inclusion of a multicamera deep learning MoCap method in the benchmarking tool further expands the scope, allowing researchers to assess the capabilities of cutting-edge technologies in diverse motion capture scenarios.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140736197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0