
Latest Publications in IEICE Transactions on Information and Systems

Regressive Gaussian Process Latent Variable Model for Few-Frame Human Motion Prediction
Tier 4 (Computer Science); Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS; Pub Date: 2023-10-01; DOI: 10.1587/transinf.2023pcp0001
Xin JIN, Jia GUO
Human motion prediction has always been an interesting research topic in computer vision and robotics. It refers to forecasting future human movements conditioned on historical 3-dimensional human skeleton sequences. Existing prediction algorithms usually rely on extensive annotated or non-annotated motion-capture data and are non-adaptive. This paper addresses the problem of few-frame human motion prediction, in the spirit of recent progress on manifold learning. More precisely, our approach is based on the insight that an accurate prediction relies on a sufficiently linear expression in the latent space obtained from a few training data in observation space. To accomplish this, we propose the Regressive Gaussian Process Latent Variable Model (RGPLVM), which introduces a novel regressive kernel function for model training. By doing so, our model produces a linear mapping from the training data space to the latent space, effectively transforming the prediction of human motion in physical space into an equivalent linear regression analysis in the latent space. The comparison with two learning-based motion prediction approaches (state-of-the-art meta learning and the classical LSTM-3LR) demonstrates that our RGPLVM significantly improves prediction performance on various actions in the small-sample-size regime.
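As a rough illustration of the idea described above (a linear mapping into a latent space, with prediction carried out as linear regression there), here is a minimal Python sketch. It is not the authors' RGPLVM: it substitutes an SVD/PCA projection for the learned Gaussian-process mapping, and the helper names, latent dimension, and random stand-in data are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's RGPLVM): embed skeleton frames into a
# low-dimensional latent space with a linear map (PCA via SVD), fit a
# one-step linear regressor in that space, and map predictions back.

def fit_linear_latent(frames, latent_dim=8):
    """frames: (T, D) array of flattened 3D joint coordinates."""
    mean = frames.mean(axis=0)
    X = frames - mean
    # SVD of the centered data gives an orthogonal linear map to latent space.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    W = Vt[:latent_dim].T                      # (D, latent_dim)
    Z = X @ W                                  # latent trajectory (T, latent_dim)
    return mean, W, Z

def fit_latent_regressor(Z):
    """Least-squares one-step predictor z_{t+1} ~ A z_t + b in latent space."""
    Zin = np.hstack([Z[:-1], np.ones((len(Z) - 1, 1))])
    coef, *_ = np.linalg.lstsq(Zin, Z[1:], rcond=None)
    return coef                                 # ((latent_dim+1), latent_dim)

def predict(frames, steps, latent_dim=8):
    mean, W, Z = fit_linear_latent(frames, latent_dim)
    coef = fit_latent_regressor(Z)
    z = Z[-1]
    preds = []
    for _ in range(steps):
        z = np.append(z, 1.0) @ coef            # linear regression step in latent space
        preds.append(mean + W @ z)              # decode back to joint space
    return np.stack(preds)

# Example with random stand-in data: 20 observed frames, 17 joints x 3 coordinates.
history = np.random.randn(20, 51)
future = predict(history, steps=10)
print(future.shape)                             # (10, 51)
```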
Citations: 0
Neural Network-Based Post-Processing Filter on V-PCC Attribute Frames
Tier 4 (Computer Science); Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS; Pub Date: 2023-10-01; DOI: 10.1587/transinf.2023pcl0002
Keiichiro TAKADA, Yasuaki TOKUMO, Tomohiro IKAI, Takeshi CHUJOH
Video-based point cloud compression (V-PCC) utilizes video compression technology to efficiently encode dense point clouds, providing state-of-the-art compression performance with a relatively small computation burden. V-PCC converts 3-dimensional point cloud data into three types of 2-dimensional frames, i.e., occupancy, geometry, and attribute frames, and encodes them via video compression. However, the quality of these frames may be degraded by video compression. This paper proposes an adaptive neural network-based post-processing filter on attribute frames to alleviate this degradation. Furthermore, a novel training method using occupancy frames is studied. The experimental results show average BD-rate gains of 3.0%, 29.3% and 22.2% for Y, U and V, respectively.
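To make the notion of a learned post-processing filter on decoded attribute frames concrete, here is a minimal PyTorch sketch of a small residual CNN that also takes the occupancy frame as input. It is not the network proposed in the paper; the class name AttributePostFilter, the layer sizes, channel counts, and input layout are assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch (not the paper's architecture): a small residual CNN that
# takes a decoded attribute frame plus its occupancy map and predicts a
# correction that is added back to the attribute frame.
class AttributePostFilter(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, channels, kernel_size=3, padding=1),  # RGB attribute + occupancy
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, kernel_size=3, padding=1),       # predicted residual
        )

    def forward(self, attribute, occupancy):
        x = torch.cat([attribute, occupancy], dim=1)
        return attribute + self.net(x)           # residual correction

# Example: one 256x256 decoded attribute frame and its binary occupancy map.
attr = torch.rand(1, 3, 256, 256)
occ = (torch.rand(1, 1, 256, 256) > 0.5).float()
filtered = AttributePostFilter()(attr, occ)
print(filtered.shape)                            # torch.Size([1, 3, 256, 256])
```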
Citations: 0
GPU-Accelerated Estimation and Targeted Reduction of Peak IR-Drop during Scan Chain Shifting
Tier 4 (Computer Science); Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS; Pub Date: 2023-10-01; DOI: 10.1587/transinf.2023edp7011
Shiling SHI, Stefan HOLST, Xiaoqing WEN
High power dissipation during scan testing often causes undue yield loss, especially for low-power circuits. One major reason is that the resulting IR-drop in shift mode may corrupt test data. A common approach to this problem is partial-shift, in which multiple scan chains are formed and only one group of scan chains is shifted at a time. However, existing partial-shift based methods suffer from two major problems: (1) their IR-drop estimation is not accurate enough, or computationally too expensive, to be done for each shift cycle; (2) partial-shift is hence applied to all shift cycles, resulting in a long test time. This paper addresses these two problems with a novel IR-drop-aware scan shift method, featuring: (1) Cycle-based IR-Drop Estimation (CIDE), supported by a GPU-accelerated dynamic power simulator, to quickly find potential shift cycles with excessive peak IR-drop; (2) a scan shift scheduling method that generates a scan chain grouping targeted at each considered shift cycle to reduce the impact on test time. Experiments on ITC'99 benchmark circuits show that (1) CIDE is computationally feasible and (2) the proposed scan shift schedule can achieve a global peak IR-drop reduction of up to 47%, with scheduling efficiency on average 58.4% higher than that of a typical existing method, which means our method needs less test time.
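The scheduling idea, applying partial shift only in the cycles whose estimated peak IR-drop exceeds a limit, can be sketched as follows. This is a toy Python sketch, not the paper's CIDE simulator or its grouping algorithm; the function schedule_shifts, the per-cycle estimates, the threshold, and the group count are illustrative assumptions.

```python
# Toy sketch of the scheduling idea: given an estimated peak IR-drop per
# shift cycle, shift all scan chains together in safe cycles and fall back
# to group-by-group (partial) shifting only where the estimate exceeds a limit.

def schedule_shifts(peak_ir_drop, ir_limit, num_groups):
    """Return a per-cycle plan: 1 shift operation for a full shift,
    num_groups operations when partial shift is needed."""
    plan = []
    for cycle, drop in enumerate(peak_ir_drop):
        if drop <= ir_limit:
            plan.append((cycle, "full", 1))
        else:
            plan.append((cycle, "partial", num_groups))
    return plan

estimates = [0.08, 0.14, 0.09, 0.21, 0.07]   # estimated peak IR-drop per cycle (V)
plan = schedule_shifts(estimates, ir_limit=0.12, num_groups=4)
extra = sum(ops for _, _, ops in plan) - len(plan)
print(plan)
print(f"extra shift operations vs. always-full shifting: {extra}")
```

In this toy example only the two risky cycles pay the partial-shift penalty, which is the trade-off between peak IR-drop and test time that the paper targets.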
Citations: 0
Multi-Scale Estimation for Omni-Directional Saliency Maps Using Learnable Equator Bias
Tier 4 (Computer Science); Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS; Pub Date: 2023-10-01; DOI: 10.1587/transinf.2023edp7055
Takao YAMANAKA, Tatsuya SUZUKI, Taiki NOBUTSUNE, Chenjunlin WU
Omni-directional images have been used in a wide range of applications including virtual/augmented reality, self-driving cars, robotics simulators, and surveillance systems. For these applications, it would be useful to estimate saliency maps representing probability distributions of gazing points with a head-mounted display, in order to detect important regions in the omni-directional images. This paper proposes a novel saliency-map estimation model for omni-directional images by extracting overlapping 2-dimensional (2D) plane images from omni-directional images at various directions and angles of view. While 2D saliency maps tend to have high probability at the center of images (center bias), the high-probability region appears in horizontal directions in omni-directional saliency maps when a head-mounted display is used (equator bias). Therefore, the 2D saliency model with a center-bias layer was fine-tuned with an omni-directional dataset by replacing the center-bias layer with an equator-bias layer conditioned on the elevation angle at which the 2D plane image is extracted. The limited availability of omni-directional images in saliency datasets can be compensated for by using a well-established 2D saliency model pretrained on a large number of training images with ground-truth 2D saliency maps. In addition, this paper proposes a multi-scale estimation method that extracts 2D images at multiple angles of view to detect objects of various sizes with variable receptive fields. The saliency maps estimated from the multiple angles of view were integrated using pixel-wise attention weights calculated in an integration layer, which weights the optimal scale for each object. The proposed method was evaluated using a publicly available dataset with evaluation metrics for omni-directional saliency maps, and it was confirmed that the proposed method improves the accuracy of the saliency maps.
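The equator-bias prior and the pixel-wise weighted integration of multi-scale maps can be illustrated with a short numpy sketch. It is not the paper's model: the Gaussian falloff over elevation angle, the equirectangular grid size, and the random stand-in saliency maps and attention weights are assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's model): an equator-bias prior that decays
# with elevation angle, used to reweight saliency maps on an equirectangular grid.

def equator_bias(height, width, sigma_deg=25.0):
    """Prior peaking at 0 deg elevation, i.e. the equator row of the panorama."""
    elevation = np.linspace(90.0, -90.0, height)             # degrees per row
    row_weight = np.exp(-0.5 * (elevation / sigma_deg) ** 2)
    return np.tile(row_weight[:, None], (1, width))

def combine_scales(saliency_maps, attention_weights):
    """Pixel-wise weighted average over saliency maps from several angles of view."""
    stacked = np.stack(saliency_maps)                         # (S, H, W)
    weights = np.stack(attention_weights)                     # (S, H, W), sum to 1 over S
    return (stacked * weights).sum(axis=0)

H, W = 180, 360
maps = [np.random.rand(H, W) for _ in range(3)]               # three angles of view
raw_w = np.random.rand(3, H, W)
weights = raw_w / raw_w.sum(axis=0, keepdims=True)
saliency = combine_scales(maps, list(weights)) * equator_bias(H, W)
saliency /= saliency.sum()                                    # normalize to a distribution
print(saliency.shape)                                         # (180, 360)
```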
Citations: 0
Decentralized Incentive Scheme for Peer-to-Peer Video Streaming using Solana Blockchain
Tier 4 (Computer Science); Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS; Pub Date: 2023-10-01; DOI: 10.1587/transinf.2023edp7027
Yunqi MA, Satoshi FUJITA
Peer-to-peer (P2P) technology has gained popularity as a way to enhance system performance. Nodes in a P2P network work together by providing network resources to one another. In this study, we examine the use of P2P technology for video streaming and develop a distributed incentive mechanism to prevent free-riding. Our proposed solution combines WebTorrent and the Solana blockchain and can be accessed through a web browser. To incentivize uploads, some of the received video chunks are encrypted using AES. Smart contracts on the blockchain are used for third-party verification of uploads and for managing access to the video content. Experimental results on a test network showed that our system can encrypt and decrypt chunks in about 1/40th the time it takes using WebRTC, without affecting the quality of video streaming. Smart contracts were also found to quickly verify uploads, in about 860 milliseconds. The paper also explores how to effectively award virtual points for uploads.
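Only the chunk-encryption step is sketched below, using AES-GCM from the Python cryptography package. The abstract states only that some received chunks are AES-encrypted; the choice of GCM, the nonce and key handling, and the helper names encrypt_chunk/decrypt_chunk are assumptions, and the WebTorrent and Solana smart-contract parts are omitted entirely.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Sketch of the chunk-encryption idea only; key distribution and the
# on-chain verification logic are out of scope here.

def encrypt_chunk(key: bytes, chunk: bytes, chunk_id: int) -> tuple[bytes, bytes]:
    """Encrypt one video chunk; the chunk id is bound as associated data."""
    nonce = os.urandom(12)                       # 96-bit nonce, unique per chunk
    ciphertext = AESGCM(key).encrypt(nonce, chunk, str(chunk_id).encode())
    return nonce, ciphertext

def decrypt_chunk(key: bytes, nonce: bytes, ciphertext: bytes, chunk_id: int) -> bytes:
    return AESGCM(key).decrypt(nonce, ciphertext, str(chunk_id).encode())

key = AESGCM.generate_key(bit_length=128)
nonce, ct = encrypt_chunk(key, b"\x00" * 16384, chunk_id=42)   # 16 KiB dummy chunk
assert decrypt_chunk(key, nonce, ct, chunk_id=42) == b"\x00" * 16384
```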
Citations: 0
Feedback Node Sets in Pancake Graphs and Burnt Pancake Graphs
Tier 4 (Computer Science); Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS; Pub Date: 2023-10-01; DOI: 10.1587/transinf.2022edp7211
Sinyu JUNG, Keiichi KANEKO
A feedback node set (FNS) of a graph is a subset of the nodes of the graph whose deletion makes the residual graph acyclic. By finding an FNS in an interconnection network, we can set a checkpoint at each node in it to avoid a livelock configuration. Hence, finding an FNS is a critical issue for enhancing the dependability of a parallel computing system. In this paper, we propose a method to find FNSs in n-pancake graphs and n-burnt pancake graphs. By analyzing the types of cycles considered in our method, we also give the number of nodes in the FNS of an n-pancake graph, (n-2.875)(n-1)!+1.5(n-3)!, and that of an n-burnt pancake graph, 2n-1(n-1)!(n-3.5).
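The FNS definition quoted above (deleting the set must leave the residual graph acyclic) can be checked mechanically; the sketch below does exactly that for a toy 4-cycle and is unrelated to the paper's construction for pancake graphs. The function name and the example graph are purely illustrative.

```python
# Check whether deleting a candidate node set leaves the residual
# undirected graph acyclic, i.e. a forest, using union-find:
# an edge whose endpoints are already connected closes a cycle.
def is_feedback_node_set(edges, nodes, candidate):
    remaining = set(nodes) - set(candidate)
    kept = [(u, v) for u, v in edges if u in remaining and v in remaining]
    parent = {v: v for v in remaining}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]        # path compression
            v = parent[v]
        return v

    for u, v in kept:
        ru, rv = find(u), find(v)
        if ru == rv:                             # edge closes a cycle
            return False
        parent[ru] = rv
    return True

cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_feedback_node_set(cycle4, range(4), {0}))    # True: removing one node breaks the cycle
print(is_feedback_node_set(cycle4, range(4), set()))  # False: the 4-cycle remains
```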
Citations: 0
Important Notice of the Cancellation of Special Section on Formal Approaches
Tier 4 (Computer Science); Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS; Pub Date: 2023-10-01; DOI: 10.1587/transinf.2022fop0000
{"title":"Important Notice of the Cancellation of Special Section on Formal Approaches","authors":"","doi":"10.1587/transinf.2022fop0000","DOIUrl":"https://doi.org/10.1587/transinf.2022fop0000","url":null,"abstract":"","PeriodicalId":55002,"journal":{"name":"IEICE Transactions on Information and Systems","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135373148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reconfigurable Pedestrian Detection System Using Deep Learning for Video Surveillance
IF 0.7; Tier 4 (Computer Science); Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS; Pub Date: 2023-09-01; DOI: 10.1587/transinf.2019edl8132
M. K. Jeevarajan, P. N. Kumar
{"title":"Reconfigurable Pedestrian Detection System Using Deep Learning for Video Surveillance","authors":"M. K. Jeevarajan, P. N. Kumar","doi":"10.1587/transinf.2019edl8132","DOIUrl":"https://doi.org/10.1587/transinf.2019edl8132","url":null,"abstract":"","PeriodicalId":55002,"journal":{"name":"IEICE Transactions on Information and Systems","volume":"106 1","pages":"1610-1614"},"PeriodicalIF":0.7,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67308393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Framework of Measuring Engagement with Access Logs Under Tracking Prevention for Affiliate Services
IF 0.7; Tier 4 (Computer Science); Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS; Pub Date: 2023-09-01; DOI: 10.1587/transinf.2022ofp0001
Motoi Iwashita, Hirotaka Sugita
{"title":"Framework of Measuring Engagement with Access Logs Under Tracking Prevention for Affiliate Services","authors":"Motoi Iwashita, Hirotaka Sugita","doi":"10.1587/transinf.2022ofp0001","DOIUrl":"https://doi.org/10.1587/transinf.2022ofp0001","url":null,"abstract":"","PeriodicalId":55002,"journal":{"name":"IEICE Transactions on Information and Systems","volume":"106 1","pages":"1452-1460"},"PeriodicalIF":0.7,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67309110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Discriminative Question Answering via Cascade Prompt Learning and Sentence Level Attention Mechanism
IF 0.7; Tier 4 (Computer Science); Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS; Pub Date: 2023-09-01; DOI: 10.1587/transinf.2022edp7225
Xiaoguang Yuan, Chaofan Dai, Zongkai Tian, Xinyu Fan, Yingyi Song, Ze Yu, Peifeng Wang, Wenjun Ke
{"title":"Discriminative Question Answering via Cascade Prompt Learning and Sentence Level Attention Mechanism","authors":"Xiaoguang Yuan, Chaofan Dai, Zongkai Tian, Xinyu Fan, Yingyi Song, Ze Yu, Peifeng Wang, Wenjun Ke","doi":"10.1587/transinf.2022edp7225","DOIUrl":"https://doi.org/10.1587/transinf.2022edp7225","url":null,"abstract":"","PeriodicalId":55002,"journal":{"name":"IEICE Transactions on Information and Systems","volume":"106 1","pages":"1584-1599"},"PeriodicalIF":0.7,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67308904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0