Direction and velocity kinematic features of point-light displays grasping actions are differentially coded within the action observation network.

NeuroImage · IF 4.7 · Medicine Tier 2 · Q1 (Neuroimaging) · Publication date: 2024-11-16 · DOI: 10.1016/j.neuroimage.2024.120939
Settimio Ziccarelli, Antonino Errante, Leonardo Fogassi
{"title":"点光显示抓取动作的方向和速度运动特征在动作观察网络中进行了不同的编码。","authors":"Settimio Ziccarelli, Antonino Errante, Leonardo Fogassi","doi":"10.1016/j.neuroimage.2024.120939","DOIUrl":null,"url":null,"abstract":"<p><p>The processing of kinematic information embedded in observed actions is an essential ability for understanding others' behavior. Previous research showed that the action observation network (AON) may encode some action kinematic features. However, our understanding of how direction and velocity are encoded within the AON is still limited. In this study, we employed event-related fMRI to investigate the neural substrates specifically activated during observation of hand grasping actions presented as point-light displays, performed with different directions (right, left) and velocities (fast, slow). Twenty-three healthy adult participants took part in the study. To identify brain regions differentially recruited by grasping direction and velocity, univariate and multivariate pattern analysis (MVPA) were performed. The results of univariate analysis demonstrate that direction is encoded in occipito-temporal and posterior visual areas, while velocity recruits lateral occipito-temporal, superior parietal and intraparietal areas. Results of MVPA further show: a) a significant decoding accuracy of both velocity and direction at the network level; b) the possibility to decode within lateral occipito-temporal and parietal areas both direction and velocity; c) a contribution of bilateral premotor areas to velocity decoding models. These results indicate that posterior parietal nodes of the AON are mainly involved in coding grasping direction and that premotor regions are crucial for coding grasping velocity, while lateral occipito-temporal cortices play a key role in encoding both parameters. The current findings could have implications for observational-based rehabilitation treatments of patients with motor disorders and artificial intelligence-based hand action recognition models.</p>","PeriodicalId":19299,"journal":{"name":"NeuroImage","volume":" ","pages":"120939"},"PeriodicalIF":4.7000,"publicationDate":"2024-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Direction and velocity kinematic features of point-light displays grasping actions are differentially coded within the action observation network.\",\"authors\":\"Settimio Ziccarelli, Antonino Errante, Leonardo Fogassi\",\"doi\":\"10.1016/j.neuroimage.2024.120939\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>The processing of kinematic information embedded in observed actions is an essential ability for understanding others' behavior. Previous research showed that the action observation network (AON) may encode some action kinematic features. However, our understanding of how direction and velocity are encoded within the AON is still limited. In this study, we employed event-related fMRI to investigate the neural substrates specifically activated during observation of hand grasping actions presented as point-light displays, performed with different directions (right, left) and velocities (fast, slow). Twenty-three healthy adult participants took part in the study. To identify brain regions differentially recruited by grasping direction and velocity, univariate and multivariate pattern analysis (MVPA) were performed. 
The results of univariate analysis demonstrate that direction is encoded in occipito-temporal and posterior visual areas, while velocity recruits lateral occipito-temporal, superior parietal and intraparietal areas. Results of MVPA further show: a) a significant decoding accuracy of both velocity and direction at the network level; b) the possibility to decode within lateral occipito-temporal and parietal areas both direction and velocity; c) a contribution of bilateral premotor areas to velocity decoding models. These results indicate that posterior parietal nodes of the AON are mainly involved in coding grasping direction and that premotor regions are crucial for coding grasping velocity, while lateral occipito-temporal cortices play a key role in encoding both parameters. The current findings could have implications for observational-based rehabilitation treatments of patients with motor disorders and artificial intelligence-based hand action recognition models.</p>\",\"PeriodicalId\":19299,\"journal\":{\"name\":\"NeuroImage\",\"volume\":\" \",\"pages\":\"120939\"},\"PeriodicalIF\":4.7000,\"publicationDate\":\"2024-11-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"NeuroImage\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1016/j.neuroimage.2024.120939\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"NEUROIMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"NeuroImage","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1016/j.neuroimage.2024.120939","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"NEUROIMAGING","Score":null,"Total":0}
Citations: 0

Abstract


The processing of kinematic information embedded in observed actions is an essential ability for understanding others' behavior. Previous research showed that the action observation network (AON) may encode some action kinematic features. However, our understanding of how direction and velocity are encoded within the AON is still limited. In this study, we employed event-related fMRI to investigate the neural substrates specifically activated during observation of hand grasping actions presented as point-light displays, performed with different directions (right, left) and velocities (fast, slow). Twenty-three healthy adult participants took part in the study. To identify brain regions differentially recruited by grasping direction and velocity, univariate and multivariate pattern analysis (MVPA) were performed. The results of univariate analysis demonstrate that direction is encoded in occipito-temporal and posterior visual areas, while velocity recruits lateral occipito-temporal, superior parietal and intraparietal areas. Results of MVPA further show: a) a significant decoding accuracy of both velocity and direction at the network level; b) the possibility to decode within lateral occipito-temporal and parietal areas both direction and velocity; c) a contribution of bilateral premotor areas to velocity decoding models. These results indicate that posterior parietal nodes of the AON are mainly involved in coding grasping direction and that premotor regions are crucial for coding grasping velocity, while lateral occipito-temporal cortices play a key role in encoding both parameters. The current findings could have implications for observational-based rehabilitation treatments of patients with motor disorders and artificial intelligence-based hand action recognition models.
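The abstract does not detail the decoding pipeline, but the kind of MVPA it describes (classifying observed grasping direction or velocity from trial-wise activation patterns) is commonly implemented as a cross-validated linear classifier. The sketch below is a minimal illustration under that assumption, using synthetic placeholder data rather than the study's fMRI patterns; the array shapes, labels, and scikit-learn classifier are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of a leave-one-run-out MVPA decoding analysis of the kind
# described in the abstract. All data below are synthetic placeholders:
# X stands in for trial-wise voxel activation patterns from a region of
# interest, and the labels for the observed grasping direction.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_runs, trials_per_run, n_voxels = 6, 16, 200

X = rng.standard_normal((n_runs * trials_per_run, n_voxels))  # placeholder beta patterns
direction = rng.integers(0, 2, size=n_runs * trials_per_run)  # 0 = left, 1 = right
runs = np.repeat(np.arange(n_runs), trials_per_run)           # run labels for cross-validation

# Linear classifier with voxel-wise standardization, evaluated with
# leave-one-run-out cross-validation; chance level is 0.5 for two classes.
decoder = make_pipeline(StandardScaler(), LinearSVC(max_iter=10000))
scores = cross_val_score(decoder, X, direction, groups=runs, cv=LeaveOneGroupOut())
print(f"Mean decoding accuracy (direction): {scores.mean():.2f}")
```

The same cross-validation scheme would apply to the velocity labels (fast vs. slow), and accuracies would typically be compared against chance with a permutation test; those steps are omitted here for brevity.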

Source journal: NeuroImage (Medicine - Nuclear Medicine)
CiteScore: 11.30
Self-citation rate: 10.50%
Articles published: 809
Review time: 63 days
About the journal: NeuroImage, a Journal of Brain Function, provides a vehicle for communicating important advances in acquiring, analyzing, and modelling neuroimaging data and in applying these techniques to the study of structure-function and brain-behavior relationships. Though the emphasis is on the macroscopic level of human brain organization, meso- and microscopic neuroimaging across all species will be considered if informative for understanding the aforementioned relationships.
Latest articles in this journal:
Cerebellar representation during phonetic processing in tonal and non-tonal language speakers: An ALE meta-analysis.
Deep learning applied to the segmentation of rodent brain MRI data outperforms noisy ground truth on full-fledged brain atlases.
Development of A Novel Radioiodinated Compound for Amyloid and Tau Deposition imaging in Alzheimer's disease and Tauopathy Mouse Models.
Investigating Unilateral and Bilateral Motor Imagery Control Using Electrocorticography and fMRI in Awake Craniotomy.
Multiclass Classification of Alzheimer's Disease Prodromal Stages using Sequential Feature Embeddings and Regularized Multikernel Support Vector Machine.