
Journal of Information and Intelligence: Latest Publications

Boosting brain-computer interface performance through cognitive training: A brain-centric approach
Pub Date: 2025-01-01 DOI: 10.1016/j.jiixd.2024.06.003
Ziyuan Zhang , Ziyu Wang , Kaitai Guo , Yang Zheng , Minghao Dong , Jimin Liang
Previous efforts to boost the performance of brain-computer interfaces (BCIs) have predominantly focused on optimizing algorithms for decoding brain signals. However, the untapped potential of leveraging brain plasticity for optimization remains underexplored. In this study, we enhanced the temporal resolution of the human brain in discriminating visual stimuli by eliminating the attentional blink (AB) through color-salient cognitive training, and we confirmed that the mechanism was an attention-based improvement. Using the rapid serial visual presentation (RSVP)-based BCI, we evaluated the behavioral and electroencephalogram (EEG) decoding performance of subjects before and after cognitive training in high target percentage (with AB) and low target percentage (without AB) surveillance tasks, respectively. The results consistently demonstrated significant improvements in the trained subjects. Further analysis indicated that this improvement was attributed to the cognitively trained brain producing more discriminative EEG. Our work highlights the feasibility of cognitive training as a means of brain enhancement to boost BCI performance.
Journal of Information and Intelligence, Volume 3, Issue 1, Pages 19–35.
Citations: 0
Hand-aware graph convolution network for skeleton-based sign language recognition
Pub Date: 2025-01-01 DOI: 10.1016/j.jiixd.2024.08.001
Juan Song , Huixuechun Wang , Jianan Li , Jian Zheng , Zhifu Zhao , Qingshan Li
Skeleton-based sign language recognition (SLR) is a challenging research area, mainly due to fast and complex hand movements. Graph convolution networks (GCNs) have been employed in skeleton-based SLR and have achieved remarkable performance. However, existing GCN-based SLR methods lack explicit attention to hand topology, which plays an important role in sign language representation. To address this issue, we propose a novel hand-aware graph convolution network (HA-GCN) that focuses on the hand topological relationships of the skeleton graph. Specifically, a hand-aware graph convolution layer is designed to capture both global body and local hand information, in which two sub-graphs are defined and incorporated to represent hand topology. In addition, to mitigate over-fitting, an adaptive DropGraph is built into the hand-aware graph convolution block to remove spatial and temporal redundancy in the sign language representation. To further improve performance, joint information, bones, and their motion information are simultaneously modeled in a multi-stream framework. Extensive experiments on two open-source datasets, AUTSL and INCLUDE, demonstrate that our proposed algorithm outperforms the state-of-the-art by a significant margin. Our code is available at https://github.com/snorlaxse/HA-SLR-GCN.
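The central idea of the abstract, a graph convolution layer that fuses a global body graph with a dedicated hand sub-graph, can be illustrated with a minimal sketch. Everything below (the function names, the toy 5-joint skeleton, the single-layer two-branch structure) is an illustrative assumption, not the paper's actual HA-GCN implementation; the adaptive DropGraph and the multi-stream modeling of joints, bones, and motion are omitted.

```python
import numpy as np

def normalize_adjacency(a):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    a_hat = a + np.eye(a.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

def hand_aware_gcn_layer(x, a_body, a_hand, w_body, w_hand):
    """One hypothetical layer: a global-body branch plus a hand-sub-graph branch.

    x: (num_joints, in_dim) joint features
    a_body, a_hand: (num_joints, num_joints) adjacency matrices
    """
    h_body = normalize_adjacency(a_body) @ x @ w_body  # whole-skeleton context
    h_hand = normalize_adjacency(a_hand) @ x @ w_hand  # hand-local context
    return np.maximum(h_body + h_hand, 0.0)            # ReLU activation

# Toy example: 5 joints with 3-dim features; joints 3 and 4 form the "hand".
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 3))
a_body = np.ones((5, 5)) - np.eye(5)       # fully connected body graph
a_hand = np.zeros((5, 5))
a_hand[3, 4] = a_hand[4, 3] = 1.0          # hand sub-graph edge
w_body = rng.standard_normal((3, 4))
w_hand = rng.standard_normal((3, 4))
out = hand_aware_gcn_layer(x, a_body, a_hand, w_body, w_hand)
print(out.shape)  # (5, 4)
```

Summing the two branches lets hand joints receive both whole-skeleton context and a sharper, hand-only signal, which is one plausible reading of how the paper's two sub-graphs are "defined and incorporated".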
Journal of Information and Intelligence, Volume 3, Issue 1, Pages 36–50.
Citations: 0
RIFi: Robust and iterative indoor localization based on Wi-Fi RSS fingerprints
Pub Date: 2025-01-01 DOI: 10.1016/j.jiixd.2024.07.003
Wei Liu , Meng Niu , Yunghsiang S. Han
RSS fingerprint based indoor localization consists of two phases: an offline phase and an online phase. An RSS fingerprint database constructed in the offline phase may be outdated by the online phase, which can significantly degrade localization performance. Furthermore, maintaining an RSS fingerprint database is a labor-intensive and time-consuming task. In this paper, we propose a robust and iterative indoor localization algorithm based on Wi-Fi RSS fingerprints, referred to as RIFi, which does not need to update the RSS fingerprint database and performs well even when the database is outdated. Specifically, we demonstrate that a smaller localization area provides better performance when the fingerprint database is outdated. Furthermore, we propose an iterative algorithm to determine this smaller localization area. Finally, the K-nearest neighbors (KNN) algorithm is applied within the determined smaller localization area. Simulation results show that the proposed RIFi algorithm significantly outperforms the traditional KNN algorithm on outdated RSS fingerprint databases and is more robust.
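The iterative idea described in the abstract, repeatedly re-running KNN over a shrinking candidate area around the latest estimate, can be sketched as follows. The radius schedule, the inverse-distance weighting, the synthetic grid data, and all function names are hypothetical illustrations; the paper's actual rule for determining the smaller localization area is not reproduced here.

```python
import numpy as np

def knn_localize(query_rss, fingerprints, locations, k=3):
    """Weighted KNN: average the k reference locations whose RSS vectors are closest."""
    d = np.linalg.norm(fingerprints - query_rss, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)                      # inverse-distance weights
    return (w[:, None] * locations[idx]).sum(axis=0) / w.sum()

def iterative_localize(query_rss, fingerprints, locations,
                       k=3, radius=5.0, iters=3, shrink=0.5):
    """Re-run KNN on fingerprints inside a shrinking radius around the last estimate."""
    est = knn_localize(query_rss, fingerprints, locations, k)
    for _ in range(iters):
        mask = np.linalg.norm(locations - est, axis=1) <= radius
        if mask.sum() < k:                         # keep enough neighbors for KNN
            break
        est = knn_localize(query_rss, fingerprints[mask], locations[mask], k)
        radius *= shrink                           # tighten the localization area
    return est

# Toy example: a 10 m x 10 m grid of reference points; the RSS "fingerprint" of
# each point is the negative distance to three hypothetical access points.
xs, ys = np.meshgrid(np.arange(10.0), np.arange(10.0))
locations = np.stack([xs.ravel(), ys.ravel()], axis=1)
aps = np.array([[0.0, 0.0], [9.0, 0.0], [0.0, 9.0]])
fingerprints = -np.linalg.norm(locations[:, None, :] - aps[None, :, :], axis=2)
query = -np.linalg.norm(np.array([4.3, 5.1]) - aps, axis=1)  # device at (4.3, 5.1)
est = iterative_localize(query, fingerprints, locations)
print(est)
```

Restricting each KNN pass to fingerprints near the previous estimate is one way a smaller candidate area can suppress far-away reference points whose outdated fingerprints would otherwise attract the estimate.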
Journal of Information and Intelligence, Volume 3, Issue 1, Pages 1–18.
Citations: 0