
IEICE Transactions on Information and Systems: Latest Publications

Context-Aware Stock Recommendations with Stocks' Characteristics and Investors' Traits
Zone 4, Computer Science, Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2023-10-01. DOI: 10.1587/transinf.2023edp7017
Takehiro TAKAYANAGI, Kiyoshi IZUMI
Personalized stock recommendation aims to suggest stocks tailored to individual investor needs, significantly aiding investors' financial decision-making. This study shows the advantages of incorporating context into personalized stock recommendation systems. We embed item contextual information such as technical indicators, fundamental factors, and the business activities of individual stocks. Simultaneously, we consider user contextual information such as investors' personality traits, behavioral characteristics, and attributes to create a comprehensive investor profile. Our context-aware model, validated on novel stock recommendation tasks, demonstrated a notable improvement over baseline models when these contextual features were incorporated. Consistent outperformance across various hyperparameter settings further underscores the robustness and utility of our model in integrating stocks' features and investors' traits into personalized stock recommendations.
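As an illustration of the idea only, here is a minimal sketch of context-aware scoring: item (stock) context and user (investor) context are projected into a shared embedding space and matched by inner product. All dimensions, projection matrices, and feature values are hypothetical, not the authors' model.

```python
import numpy as np

# Hypothetical sketch (not the authors' model): stock context and investor
# context are projected into a shared embedding space and matched by inner
# product. Dimensions, weights, and feature values are all illustrative.

rng = np.random.default_rng(0)

def embed(features, projection):
    """Project raw context features into the shared embedding space."""
    return features @ projection

d_item, d_user, d_emb = 6, 4, 8
W_item = rng.normal(size=(d_item, d_emb))  # would be learned in practice
W_user = rng.normal(size=(d_user, d_emb))

# Item context, e.g. technical indicators and fundamental factors.
stocks = rng.normal(size=(5, d_item))      # 5 candidate stocks
# User context, e.g. personality traits and behavioral attributes.
investor = rng.normal(size=(d_user,))

scores = embed(stocks, W_item) @ embed(investor, W_user)
ranking = np.argsort(-scores)              # highest-scoring stocks first
print(ranking)
```

In a trained system the projections would be fit to interaction data; here they are random, so only the ranking mechanics are shown.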
Citations: 0
Fusion-Based Edge and Color Recovery Using Weighted Near-Infrared Image and Color Transmission Maps for Robust Haze Removal
Zone 4, Computer Science, Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2023-10-01. DOI: 10.1587/transinf.2023pcp0007
Onhi KATO, Akira KUBOTA
Various haze removal methods based on the atmospheric scattering model have been presented in recent years. Most target strong haze images, in which light is scattered equally in all color channels. This paper presents a haze removal method that uses near-infrared (NIR) images for relatively weak haze images. To recover the lost edges, the method first extracts edges from an appropriately weighted NIR image and fuses them with the color image. By introducing a wavelength-dependent scattering model, it then estimates a transmission map for each color channel and recovers color more naturally from the edge-recovered image. Finally, the edge-recovered and color-recovered images are blended. In this blending process, regions with high lightness, such as sky and clouds, where unnatural color shifts are likely to occur, are effectively estimated, and an optimal weighting map is obtained. Qualitative and quantitative evaluations on 59 pairs of color and NIR images demonstrated that our method recovers edges and colors more naturally in weak haze images than conventional methods.
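The recovery step builds on the atmospheric scattering model with a per-channel (wavelength-dependent) transmission map: I_c = J_c·t_c + A_c·(1 - t_c), inverted as J_c = (I_c - A_c)/t_c + A_c. A toy sketch of that inversion follows; the scattering coefficients, airlight, and depth are illustrative values, not the paper's estimates.

```python
import numpy as np

# Toy sketch of the wavelength-dependent scattering model: a hazy image is
# synthesized per channel with Beer-Lambert transmission, then inverted.
# The beta values, airlight A, and depth are illustrative assumptions.

def dehaze_channel(I, A, t, t_min=0.1):
    """Invert the scattering model for one color channel."""
    t = np.clip(t, t_min, 1.0)          # avoid blow-up where haze is thick
    return (I - A) / t + A

H, W = 4, 4
depth = np.ones((H, W))                 # toy scene depth
beta = {"R": 0.6, "G": 0.8, "B": 1.0}   # scattering grows toward blue (assumed)
A = 0.9                                 # airlight, assumed gray

J = np.full((H, W, 3), 0.3)             # known clear image
hazy = np.empty_like(J)
restored = np.empty_like(J)
for c, ch in enumerate("RGB"):
    t = np.exp(-beta[ch] * depth)       # Beer-Lambert transmission per channel
    hazy[..., c] = J[..., c] * t + A * (1 - t)
    restored[..., c] = dehaze_channel(hazy[..., c], A, t)

print(np.abs(restored - J).max())       # near zero: the model inverts exactly
```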
Citations: 0
Large-Scale Gaussian Process Regression Based on Random Fourier Features and Local Approximation with Tsallis Entropy
Zone 4, Computer Science, Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2023-10-01. DOI: 10.1587/transinf.2023edl8016
Hongli ZHANG, Jinglei LIU
With the emergence of large quantities of data in science and industry, it is urgent to improve the prediction accuracy and reduce the high complexity of Gaussian process regression (GPR). However, the traditional global and local approximations have corresponding shortcomings: global approximation tends to ignore local features, while local approximation suffers from over-fitting. To solve these problems, a large-scale Gaussian process regression algorithm (RFFLT) combining random Fourier features (RFF) and local approximation is proposed. 1) To speed up training, we use a random Fourier feature map to project the input data into a random low-dimensional feature space for processing. The main innovation of the algorithm is to design features using existing fast linear processing methods, so that the inner product of the transformed data is approximately equal to the inner product in the feature space of the shift-invariant kernel specified by the user. 2) A generalized robust Bayesian committee machine (GRBCM) based on the Tsallis mutual information method is used for local approximation, which enhances the flexibility of the model and, compared with previous work, generates a sparse representation of the expert weight distribution. RFFLT was tested on six real data sets; it greatly shortened regression prediction time and improved prediction accuracy.
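The core of step 1) is the random Fourier feature map of Rahimi and Recht, whose inner products approximate a shift-invariant kernel. A self-contained sketch for the RBF kernel k(x, y) = exp(-||x - y||²/(2σ²)) follows; the dimensions, sample sizes, and feature count are illustrative, not the paper's settings.

```python
import numpy as np

# Random Fourier features: z(x) = sqrt(2/D) * cos(W^T x + b) with W drawn
# from the kernel's spectral density, so that z(x)·z(y) ≈ k(x, y).
# Dimensions here are illustrative.

rng = np.random.default_rng(1)

def rff_map(X, n_features, sigma=1.0):
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))  # RBF spectral draws
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)       # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(50, 3))
Z = rff_map(X, n_features=5000)

# Exact RBF Gram matrix (sigma = 1) versus its linear approximation Z Z^T.
K_exact = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / 2.0)
K_approx = Z @ Z.T
print(np.abs(K_exact - K_approx).max())  # small; shrinks as n_features grows
```

In RFFLT's setting the payoff is that GPR can then run on the D-dimensional linear features instead of the full n x n kernel matrix.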
Citations: 0
Filter Bank for Perfect Reconstruction of Light Field from Its Focal Stack
Zone 4, Computer Science, Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2023-10-01. DOI: 10.1587/transinf.2023pcp0006
Akira KUBOTA, Kazuya KODAMA, Daiki TAMURA, Asami ITO
Focal stacks (FS) have attracted attention as an alternative representation of the light field (LF). However, the problem of reconstructing the LF from its FS is considered ill-posed. Although many regularization methods have been discussed, no method has been proposed that solves this problem perfectly. This paper shows that, in theory, the LF can be perfectly reconstructed from the FS through a filter bank for Lambertian scenes without occlusion, provided the camera aperture used to acquire the FS is a Cauchy function. Numerical simulation demonstrated that the filter bank allows perfect reconstruction of the LF.
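For reference, a Cauchy-function aperture has the form a(x) = (1/π)·c/(x² + c²). The small sketch below only illustrates this functional form in 1-D with an arbitrary scale c; the paper's result concerns the 2-D camera aperture and the filter bank built on it.

```python
import numpy as np

# Illustrative 1-D slice of a Cauchy aperture, a(x) = (1/pi) * c / (x^2 + c^2).
# The scale c and the sampling grid are arbitrary choices for this sketch.

def cauchy_aperture(x, c=1.0):
    return (1.0 / np.pi) * c / (x ** 2 + c ** 2)

x = np.linspace(-200.0, 200.0, 400001)
a = cauchy_aperture(x)
dx = x[1] - x[0]
area = float(a.sum() * dx)   # total admitted light; close to 1 despite heavy tails
print(area)
```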
Citations: 0
Fault-Resilient Robot Operating System Supporting Rapid Fault Recovery with Node Replication
Zone 4, Computer Science, Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2023-10-01. DOI: 10.1587/transinf.2023edl8014
Jonghyeok YOU, Heesoo KIM, Kilho LEE
This paper proposes a fault-resilient ROS platform supporting rapid fault detection and recovery. The platform employs heartbeat-based fault detection and node replication-based recovery. Our prototype implementation on top of ROS Melodic shows strong performance in evaluations with an Nvidia development board and an inverted pendulum device.
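A toy sketch of heartbeat-based fault detection in the spirit of the abstract: a monitor flags any node whose last heartbeat is older than a timeout, at which point a replicated standby would be promoted. The class, node names, and timings below are hypothetical, not the paper's ROS implementation.

```python
# Hypothetical heartbeat monitor (not the paper's ROS code): each node calls
# beat() periodically; failed_nodes() reports nodes whose last heartbeat is
# older than the timeout, triggering replica promotion in a real system.

class HeartbeatMonitor:
    def __init__(self, timeout):
        self.timeout = timeout
        self.last_beat = {}

    def beat(self, node, now):
        self.last_beat[node] = now

    def failed_nodes(self, now):
        return [n for n, t in self.last_beat.items()
                if now - t > self.timeout]

monitor = HeartbeatMonitor(timeout=0.5)
monitor.beat("controller", now=0.0)
monitor.beat("planner", now=0.0)
monitor.beat("controller", now=0.4)      # controller keeps beating

# At t = 0.7 the planner's last beat (t = 0.0) exceeds the 0.5 s timeout.
print(monitor.failed_nodes(now=0.7))     # ['planner']
```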
Citations: 0
Social Relation Atmosphere Recognition with Relevant Visual Concepts
Zone 4, Computer Science, Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2023-10-01. DOI: 10.1587/transinf.2023pcp0008
Ying JI, Yu WANG, Kensaku MORI, Jien KATO
Social relationships (e.g., couples, opponents) are a foundational part of society. Social relation atmosphere describes the overall interaction environment within a social relationship. Discovering the social relation atmosphere can help machines better comprehend human behaviors and improve the performance of socially intelligent applications. Most existing research focuses on investigating social relationships while ignoring the social relation atmosphere. Due to the complexity of expressions in video data and the uncertainty of the social relation atmosphere, it is difficult even to define and evaluate. In this paper, we analyze the social relation atmosphere in video data. We introduce Relevant Visual Concepts (RVC) from the social relationship recognition task to facilitate social relation atmosphere recognition, because social relationships contain useful information about human interactions and surrounding environments, which are crucial clues for social relation atmosphere recognition. Our approach consists of two main steps: (1) we first generate a group of visual concepts that preserve the inherent social relationship information by utilizing a 3D explanation module; (2) the extracted relevant visual concepts are used to supplement social relation atmosphere recognition. In addition, we present a new dataset based on the existing Video Social Relation Dataset. Each video is annotated with four kinds of social relation atmosphere attributes and one social relationship. We evaluate the proposed method on our dataset. Experiments with various 3D ConvNets and fusion methods demonstrate that the proposed method can effectively improve recognition accuracy compared to end-to-end ConvNets. The visualization results also indicate that essential information in social relationships can be discovered and used to enhance social relation atmosphere recognition.
Citations: 0
Prior Information Based Decomposition and Reconstruction Learning for Micro-Expression Recognition
Zone 4, Computer Science, Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2023-10-01. DOI: 10.1587/transinf.2022edl8065
Jinsheng WEI, Haoyu CHEN, Guanming LU, Jingjie YAN, Yue XIE, Guoying ZHAO
Micro-expression recognition (MER) draws intensive research interest because micro-expressions (MEs) can reveal genuine emotions. Prior information can guide a model to learn discriminative ME features effectively. However, most works focus on general models with stronger representation ability that adaptively aggregate ME movement information in a holistic way, which may ignore the prior information and properties of MEs. To solve this issue, driven by the prior information that the category of an ME can be inferred from the relationship between the actions of different facial components, this work designs a novel model that conforms to this prior information and learns ME movement features in an interpretable way. Specifically, this paper proposes a Decomposition and Reconstruction-based Graph Representation Learning (DeRe-GRL) model to effectively learn high-level ME features. DeRe-GRL includes two modules: the Action Decomposition Module (ADM) and the Relation Reconstruction Module (RRM), where ADM learns the action features of facial key components and RRM explores the relationships between these action features. Based on facial key components, ADM divides the geometric movement features extracted by the graph-model-based backbone into several sub-features and learns a map matrix to map these sub-features into multiple action features; RRM then learns weights to weight all action features and build the relationships between them. The experimental results demonstrate the effectiveness of the proposed modules, and the proposed method achieves competitive performance.
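A hedged sketch of the ADM/RRM data flow as described: a holistic geometric feature is split into per-component sub-features, each is mapped to an action feature by a map matrix, and the action features are then weighted into a relation representation. All dimensions and the softmax weighting are our own illustrative choices, not the authors' architecture.

```python
import numpy as np

# Illustrative ADM/RRM flow (dimensions and weighting are assumptions):
# split -> map to action features -> weight and combine.

rng = np.random.default_rng(3)

geo = rng.normal(size=(12,))      # holistic geometric movement feature
sub = geo.reshape(4, 3)           # ADM: 4 facial components x 3-dim sub-features

M = rng.normal(size=(3, 5))       # ADM map matrix (learned in practice)
actions = sub @ M                 # (4, 5) action features, one per component

logits = rng.normal(size=(4,))    # RRM relation weights (learned in practice)
w = np.exp(logits) / np.exp(logits).sum()   # softmax over components
relation = w @ actions            # weighted relation representation, shape (5,)
print(relation.shape)
```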
Citations: 0
Quantitative Estimation of Video Forgery with Anomaly Analysis of Optical Flow
Zone 4, Computer Science, Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2023-10-01. DOI: 10.1587/transinf.2022edl8107
Wan Yeon LEE, Yun-Seok CHOI, Tong Min KIM
We propose a quantitative measurement technique for video forgery that eliminates the burden of deciding the subtle boundary between normal and tampered patterns. We also propose an automatic adjustment scheme for spatial and temporal target zones, which maximizes the abnormality measurement of forged videos. Evaluation shows that the proposed scheme provides manifest detection capability against both inter-frame and intra-frame forgeries.
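One simple way to realize a quantitative forgery score in this spirit is a robust z-score of per-transition optical-flow magnitudes: a spliced frame shows motion far from the video's typical motion. The statistic, the flow values, and the threshold below are hypothetical stand-ins, not the authors' measurement.

```python
import numpy as np

# Hypothetical anomaly score over optical-flow magnitudes: deviation from the
# median in units of the median absolute deviation (a robust z-score).

def anomaly_scores(flow_magnitudes):
    m = np.asarray(flow_magnitudes, dtype=float)
    med = np.median(m)
    mad = np.median(np.abs(m - med)) + 1e-9   # robust spread estimate
    return np.abs(m - med) / mad

# Mean flow magnitude per frame transition; transition 5 simulates a splice.
flows = [1.0, 1.1, 0.9, 1.0, 1.05, 6.0, 1.0, 0.95]
scores = anomaly_scores(flows)
suspect = int(np.argmax(scores))
print(suspect, float(scores[suspect]))   # the splice stands out sharply
```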
Citations: 0
Local-to-Global Structure-Aware Transformer for Question Answering over Structured Knowledge
Zone 4, Computer Science, Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2023-10-01. DOI: 10.1587/transinf.2023edp7034
Yingyao WANG, Han WANG, Chaoqun DUAN, Tiejun ZHAO
Question-answering tasks over structured knowledge (i.e., tables and graphs) require the ability to encode structural information. Traditional pre-trained language models, trained on linear-chain natural language, cannot be directly applied to encode tables and graphs. Existing methods adopt pre-trained models for such tasks by flattening structured knowledge into sequences. However, this serialization leads to a loss of the knowledge's structural information. To better employ pre-trained transformers for structured knowledge representation, we propose a novel structure-aware transformer (SATrans) that injects local-to-global structural information of the knowledge into the masks of the different self-attention layers. Specifically, in the lower self-attention layers, SATrans focuses on the local structural information of each knowledge token to learn a more robust representation of it. In the upper self-attention layers, SATrans further injects the global information of the structured knowledge to integrate information among knowledge tokens. In this way, SATrans can effectively learn semantic representations and structural information from the knowledge sequence and the attention mask, respectively. We evaluate SATrans on the table fact verification task and the knowledge base question-answering task. Furthermore, we explore two methods for combining symbolic and linguistic reasoning in these tasks to address the pre-trained models' lack of symbolic reasoning ability. The experimental results reveal that the methods consistently outperform strong baselines on the two benchmarks.
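The local-to-global masking idea can be sketched as follows: lower layers attend only along the knowledge structure's edges (plus self-loops), while upper layers attend globally. The 4-token graph and the two-local/two-global layer split below are illustrative assumptions, not SATrans itself.

```python
import numpy as np

# Illustrative local-to-global attention masks (not the SATrans code):
# lower layers restrict attention to structural neighbors, upper layers
# fall back to full self-attention.

def local_mask(n, edges):
    mask = np.eye(n, dtype=bool)            # each token attends to itself
    for i, j in edges:
        mask[i, j] = mask[j, i] = True      # ...and its structural neighbors
    return mask

def global_mask(n):
    return np.ones((n, n), dtype=bool)      # standard full self-attention

# Tokens 0-3 of a linearized table/graph; edges encode its structure.
edges = [(0, 1), (1, 2), (2, 3)]
masks = [local_mask(4, edges)] * 2 + [global_mask(4)] * 2  # 2 local + 2 global

print(masks[0][0])   # layer 0: token 0 sees only itself and token 1
print(masks[-1][0])  # top layer: token 0 sees every token
```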
引用次数: 0
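The layer-wise attention masking described in the SATrans abstract can be illustrated with a minimal NumPy sketch. Everything here — the function names, the definition of "local" as graph neighbours plus self, and the split point between local and global layers — is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def attention_masks(adj: np.ndarray, num_layers: int, local_layers: int):
    """Build per-layer additive attention masks from a token adjacency matrix.

    adj[i, j] = 1 when knowledge tokens i and j are structurally linked
    (e.g. same table row/column, or an edge in the knowledge graph).
    Lower layers attend only to local neighbours (and self); upper
    layers attend to every token, mirroring the local-to-global idea.
    """
    n = adj.shape[0]
    local = np.where((adj + np.eye(n)) > 0, 0.0, -np.inf)  # neighbours + self
    global_ = np.zeros((n, n))                              # unrestricted
    return [local if layer < local_layers else global_ for layer in range(num_layers)]

def masked_attention(q, k, v, mask):
    """Single-head scaled dot-product attention with an additive mask."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d) + mask
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

In a real transformer the masks would be added to the pre-softmax scores of every attention head; the sketch uses a single head and raw matrices so the shape of the idea stays visible.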
Facial Mask Completion Using StyleGAN2 Preserving Features of the Person
CAS Zone 4, Computer Science; Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2023-10-01 DOI: 10.1587/transinf.2023pcp0002
Norihiko KAWAI, Hiroaki KOIKE
Due to the global outbreak of coronaviruses, people are increasingly wearing masks even when photographed. As a result, photos uploaded to web pages and social networking services in which the lower half of the face is hidden are less likely to convey the attractiveness of the person photographed. In this study, we propose a method to complete facial mask regions using StyleGAN2, a type of Generative Adversarial Network (GAN). In the proposed method, a reference image of the same person without a mask is prepared separately from a target image of the person wearing a mask. After the mask region in the target image is temporarily inpainted, the face orientation and contour of the person in the reference image are changed to match those of the target image using StyleGAN2. The changed image is then composited into the mask region while correcting the color tone, producing a mask-free image that preserves the person's features.
Citations: 0
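The final compositing step described in the abstract — pasting the adapted face back into the target photo while correcting the colour tone — can be sketched generically. The per-channel gain below is a deliberately simple stand-in for the paper's colour correction, and the function name and alignment assumptions are mine, not the authors':

```python
import numpy as np

def composite_with_tone_match(target, generated, mask):
    """Paste the generated face region into the target photo.

    target, generated: float RGB images in [0, 1], same shape (H, W, 3),
    assumed already spatially aligned. mask: (H, W) array, 1 inside the
    facial-mask region to replace. A per-channel gain computed from the
    region both images share (outside the mask) roughly matches the
    colour tone before compositing.
    """
    m = mask[..., None].astype(float)
    outside = 1.0 - m
    # Per-channel means over the shared, unmasked region.
    t_mean = (target * outside).sum((0, 1)) / outside.sum((0, 1))
    g_mean = (generated * outside).sum((0, 1)) / outside.sum((0, 1))
    gain = t_mean / np.maximum(g_mean, 1e-6)
    corrected = np.clip(generated * gain, 0.0, 1.0)
    # Hard compositing: generated pixels inside the mask, target elsewhere.
    return target * (1.0 - m) + corrected * m
```

A production pipeline would typically feather the mask boundary or use Poisson blending instead of the hard cut shown here.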