
Latest Publications from IEEE Transactions on Emerging Topics in Computing

Maximizing Social Influence With Minimum Information Alteration
IF 5.9 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-07-11 | DOI: 10.1109/TETC.2023.3292384
Guan Wang;Weihua Li;Quan Bai;Edmund M-K Lai
With the rapid advancement of the Internet and social platforms, how to maximize the influence across popular online social networks has attracted great attention from both researchers and practitioners. Almost all the existing influence diffusion models assume that influence remains constant in the process of information spreading. However, in the real world, people tend to alter information by attaching opinions or modifying the contents before spreading it. Namely, the meaning and idea of a message normally mutate in the process of influence diffusion. In this article, we investigate how to maximize the influence in online social platforms with a key consideration of suppressing information alteration in the diffusion cascading process. We leverage deep learning models and knowledge graphs to represent users’ personalised behaviours, i.e., actions after receiving a message. Furthermore, we investigate the information alteration in the process of influence diffusion. A novel seed selection algorithm is proposed to maximize the social influence without causing significant information alteration. Experimental results explicitly show the rationale of the proposed user behaviours deep learning model architecture and demonstrate the novel seeding algorithm's outstanding performance in both maximizing influence and retaining the influence originality.
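The abstract does not spell out the seeding procedure, so the sketch below only illustrates the general idea of trading influence reach against message alteration: greedy seed selection over a toy independent-cascade-style simulation in which every hop may alter the message, scored as reach minus a weighted alteration count. The graph, probabilities, penalty weight, and all names are assumptions for illustration, not the authors' algorithm or their learned behaviour model.

```python
import random
from collections import deque

random.seed(0)

# Toy directed social graph: each user influences 5 random followers (assumed data).
N = 100
graph = {u: random.sample(range(N), k=5) for u in range(N)}

P_ACT = 0.15    # chance an influenced user activates a follower (assumed)
P_ALTER = 0.30  # chance the message is altered when passed along (assumed)
LAMBDA = 0.5    # weight of the alteration penalty in the seed score (assumed)

def simulate(seeds, runs=20):
    """Average (influenced, altered) counts over Monte Carlo cascade runs."""
    tot_inf = tot_alt = 0.0
    for _ in range(runs):
        altered = {s: False for s in seeds}        # node -> received an altered copy?
        queue = deque(seeds)
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in altered and random.random() < P_ACT:
                    altered[v] = altered[u] or (random.random() < P_ALTER)
                    queue.append(v)
        tot_inf += len(altered)
        tot_alt += sum(altered.values())
    return tot_inf / runs, tot_alt / runs

def greedy_seeds(k):
    """Greedily add the candidate with the best reach-minus-alteration score."""
    seeds = []
    for _ in range(k):
        best, best_score = None, float("-inf")
        for cand in range(N):
            if cand in seeds:
                continue
            reach, altered = simulate(seeds + [cand])
            score = reach - LAMBDA * altered
            if score > best_score:
                best, best_score = cand, score
        seeds.append(best)
    return seeds

print("selected seeds:", greedy_seeds(3))
```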
{"title":"Maximizing Social Influence With Minimum Information Alteration","authors":"Guan Wang;Weihua Li;Quan Bai;Edmund M-K Lai","doi":"10.1109/TETC.2023.3292384","DOIUrl":"10.1109/TETC.2023.3292384","url":null,"abstract":"With the rapid advancement of the Internet and social platforms, how to maximize the influence across popular online social networks has attracted great attention from both researchers and practitioners. Almost all the existing influence diffusion models assume that influence remains constant in the process of information spreading. However, in the real world, people tend to alternate information by attaching opinions or modifying the contents before spreading it. Namely, the meaning and idea of a message normally mutate in the process of influence diffusion. In this article, we investigate how to maximize the influence in online social platforms with a key consideration of suppressing the information alteration in the diffusion cascading process. We leverage deep learning models and knowledge graphs to present users’ personalised behaviours, i.e., actions after receiving a message. Furthermore, we investigate the information alteration in the process of influence diffusion. A novel seed selection algorithm is proposed to maximize the social influence without causing significant information alteration. Experimental results explicitly show the rationale of the proposed user behaviours deep learning model architecture and demonstrate the novel seeding algorithm's outstanding performance in both maximizing influence and retaining the influence originality.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"12 2","pages":"419-431"},"PeriodicalIF":5.9,"publicationDate":"2023-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62528854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Edgeless-GNN: Unsupervised Representation Learning for Edgeless Nodes
IF 5.9 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-07-11 | DOI: 10.1109/TETC.2023.3292240
Yong-Min Shin;Cong Tran;Won-Yong Shin;Xin Cao
We study the problem of embedding edgeless nodes such as users who newly enter the underlying network, while using graph neural networks (GNNs) widely studied for effective representation learning of graphs. Our study is motivated by the fact that GNNs cannot be straightforwardly adopted for our problem since message passing to such edgeless nodes having no connections is impossible. To tackle this challenge, we propose Edgeless-GNN, a novel inductive framework that enables GNNs to generate node embeddings even for edgeless nodes through unsupervised learning. Specifically, we start by constructing a proxy graph based on the similarity of node attributes as the GNN's computation graph defined by the underlying network. The known network structure is used to train model parameters, whereas a topology-aware loss function is established such that our model judiciously learns the network structure by encoding positive, negative, and second-order relations between nodes. For the edgeless nodes, we inductively infer embeddings by expanding the computation graph. By evaluating the performance of various downstream machine learning tasks, we empirically demonstrate that Edgeless-GNN exhibits (a) superiority over state-of-the-art inductive network embedding methods for edgeless nodes, (b) effectiveness of our topology-aware loss function, (c) robustness to incomplete node attributes, and (d) a linear scaling with the graph size.
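As a rough illustration of the proxy-graph step described above (connecting nodes, including edgeless newcomers, by the similarity of their attributes), the following sketch builds a k-nearest-neighbour graph from cosine similarity of node attribute vectors. The GNN training and topology-aware loss are omitted, and the data, k, and all names are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy node-attribute matrix: 10 existing nodes plus 1 edgeless newcomer (assumed data).
X = rng.normal(size=(11, 16))
newcomer = 10
K = 3                                       # proxy neighbours per node (assumed)

def knn_proxy_graph(X, k):
    """Symmetric k-NN adjacency built from cosine similarity of node attributes."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = Xn @ Xn.T
    np.fill_diagonal(sim, -np.inf)          # exclude self-loops
    adj = np.zeros_like(sim, dtype=bool)
    for i in range(sim.shape[0]):
        adj[i, np.argsort(sim[i])[-k:]] = True   # k most similar nodes
    return adj | adj.T                      # symmetrize

A = knn_proxy_graph(X, K)
print("proxy neighbours of the edgeless node:", np.flatnonzero(A[newcomer]))
```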
{"title":"Edgeless-GNN: Unsupervised Representation Learning for Edgeless Nodes","authors":"Yong-Min Shin;Cong Tran;Won-Yong Shin;Xin Cao","doi":"10.1109/TETC.2023.3292240","DOIUrl":"10.1109/TETC.2023.3292240","url":null,"abstract":"We study the problem of embedding \u0000<i>edgeless</i>\u0000 nodes such as users who newly enter the underlying network, while using graph neural networks (GNNs) widely studied for effective representation learning of graphs. Our study is motivated by the fact that GNNs cannot be straightforwardly adopted for our problem since message passing to such edgeless nodes having no connections is impossible. To tackle this challenge, we propose \u0000<inline-formula><tex-math>$mathsf{Edgeless-GNN}$</tex-math></inline-formula>\u0000, a novel inductive framework that enables GNNs to generate node embeddings even for edgeless nodes through \u0000<i>unsupervised learning</i>\u0000. Specifically, we start by constructing a proxy graph based on the similarity of node attributes as the GNN's computation graph defined by the underlying network. The known network structure is used to train model parameters, whereas a \u0000<i>topology-aware</i>\u0000 loss function is established such that our model judiciously learns the network structure by encoding positive, negative, and second-order relations between nodes. For the edgeless nodes, we \u0000<i>inductively</i>\u0000 infer embeddings by expanding the computation graph. By evaluating the performance of various downstream machine learning tasks, we empirically demonstrate that \u0000<inline-formula><tex-math>$mathsf{Edgeless-GNN}$</tex-math></inline-formula>\u0000 exhibits (a) superiority over state-of-the-art inductive network embedding methods for edgeless nodes, (b) effectiveness of our topology-aware loss function, (c) robustness to incomplete node attributes, and (d) a linear scaling with the graph size.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"12 1","pages":"150-162"},"PeriodicalIF":5.9,"publicationDate":"2023-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44628489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Resource Allocation Optimization by Quantum Computing for Shared Use of Standalone IRS
IF 5.9 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-07-11 | DOI: 10.1109/TETC.2023.3292355
Takahiro Ohyama;Yuichi Kawamoto;Nei Kato
Intelligent reflecting surfaces (IRSs) have attracted attention as a technology that can considerably improve the energy utilization efficiency of sixth-generation (6G) mobile communication systems. IRSs enable control of propagation characteristics by adjusting the phase shift of each reflective element. However, designing the phase shift requires the acquisition of channel information for each reflective element, which is impractical from an overhead perspective. In addition, for multiple wireless network operators to share an IRS for communication, new infrastructure facilities and operational costs are required at each operator's end to control the IRS in a coordinated manner. Herein, we propose a wireless communication system using standalone IRSs to solve these problems. The standalone IRSs cover a wide area by periodically switching phase shifts, and each operator allocates radio resources according to their phase-shift switching. Furthermore, we derive a quadratic unconstrained binary optimization equation for the proposed system to optimize radio resource allocation using quantum computing. The results of computer simulations indicate that the proposed system and method can be used to achieve efficient communication in 6G mobile communication systems.
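The paper's actual QUBO formulation is not given in the abstract. As a generic illustration of how a resource-allocation problem can be cast as a quadratic unconstrained binary optimization and handed to a quantum (or classical) solver, the sketch below encodes a toy "one slot per operator" assignment with an assumed utility matrix and penalty weight, then minimizes it by brute force.

```python
import itertools
import numpy as np

# Toy setting: 3 operators, 3 phase-shift time slots; x[i, t] = 1 means operator i
# transmits in slot t. The 9 binary variables are flattened into one vector.
OPS, SLOTS = 3, 3
gain = np.array([[3., 1., 2.],    # utility of operator i in slot t (assumed)
                 [1., 3., 2.],
                 [2., 2., 3.]])
A = 5.0                           # penalty weight for "exactly one slot each" (assumed)

def idx(i, t):
    return i * SLOTS + t

n = OPS * SLOTS
Q = np.zeros((n, n))
for i in range(OPS):
    for t in range(SLOTS):
        # Reward for using slot t, plus the expansion of A * (sum_t x[i, t] - 1)^2.
        Q[idx(i, t), idx(i, t)] += -gain[i, t] - A
        for t2 in range(t + 1, SLOTS):
            Q[idx(i, t), idx(i, t2)] += 2 * A

def energy(x):
    return x @ Q @ x

# Brute force over 2^9 states; a quantum annealer would minimize the same objective.
best = min((np.array(bits) for bits in itertools.product([0, 1], repeat=n)), key=energy)
print("slot assignment (operators x slots):\n", best.reshape(OPS, SLOTS))
```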
{"title":"Resource Allocation Optimization by Quantum Computing for Shared Use of Standalone IRS","authors":"Takahiro Ohyama;Yuichi Kawamoto;Nei Kato","doi":"10.1109/TETC.2023.3292355","DOIUrl":"10.1109/TETC.2023.3292355","url":null,"abstract":"Intelligent reflecting surfaces (IRSs) have attracted attention as a technology that can considerably improve the energy utilization efficiency of sixth-generation (6G) mobile communication systems. IRSs enable control of propagation characteristics by adjusting the phase shift of each reflective element. However, designing the phase shift requires the acquisition of channel information for each reflective element, which is impractical from an overhead perspective. In addition, for multiple wireless network operators to share an IRS for communication, new infrastructure facilities and operational costs are required at each operator's end to control the IRS in a coordinated manner. Herein, we propose a wireless communication system using standalone IRSs to solve these problems. The standalone IRSs cover a wide area by periodically switching phase shifts, and each operator allocates radio resources according to their phase-shift switching. Furthermore, we derive a quadratic unconstrained binary optimization equation for the proposed system to optimize radio resource allocation using quantum computing. The results of computer simulations indicate that the proposed system and method can be used to achieve efficient communication in 6G mobile communication systems.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"11 4","pages":"950-961"},"PeriodicalIF":5.9,"publicationDate":"2023-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62528843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Rei: A Reconfigurable Interconnection Unit for Array-Based CNN Accelerators
IF 5.9 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-06-30 | DOI: 10.1109/TETC.2023.3290138
Paria Darbani;Hakem Beitollahi;Pejman Lotfi-Kamran
Convolutional Neural Networks (CNNs) are used in many real-world applications due to their high accuracy. The rapid growth of modern applications based on learning algorithms has increased the importance of efficient implementation of CNNs. The array-type architecture is a well-known platform for the efficient implementation of CNN models, which takes advantage of parallel computation and data reuse. However, accelerators suffer from restricted hardware resources, whereas CNNs involve considerable communication and computation load. Furthermore, since accelerators execute CNNs layer by layer, different shapes and sizes of layers lead to suboptimal resource utilization. This problem prevents the accelerator from reaching maximum performance. The increasing scale and complexity of deep learning applications exacerbate this problem. Therefore, the performance of CNN models depends on the hardware's ability to adapt to the different shapes of different layers to increase resource utilization. This work proposes a reconfigurable accelerator that can efficiently execute a wide range of CNNs. The proposed flexible and low-cost reconfigurable interconnect units allow the array to perform CNN inference faster than fixed-size implementations (by 45.9% for ResNet-18 compared to the baseline). The proposed architecture also reduces the on-chip memory access rate by 36.5% without compromising accuracy.
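To illustrate the resource-utilization problem that motivates a reconfigurable interconnect, the following back-of-the-envelope sketch computes how much of a fixed-size processing-element array stays busy when layers of different shapes are tiled onto it; the array size and layer shapes are assumed and unrelated to the paper's measurements.

```python
import math

# A fixed 16x16 processing-element array (assumed size, not the paper's design).
ROWS, COLS = 16, 16

# A few layer shapes, given as (output channels, input channels) to map onto the
# array rows/columns; values are purely illustrative.
layers = {"conv_a": (64, 64), "conv_b": (10, 3), "conv_c": (24, 20)}

def utilization(out_ch, in_ch, rows=ROWS, cols=COLS):
    """Fraction of PEs doing useful work when tiling an out_ch x in_ch layer."""
    tiles = math.ceil(out_ch / rows) * math.ceil(in_ch / cols)
    return (out_ch * in_ch) / (tiles * rows * cols)

for name, (oc, ic) in layers.items():
    print(f"{name}: {utilization(oc, ic):.1%} of the array is busy")
```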
{"title":"Rei: A Reconfigurable Interconnection Unit for Array-Based CNN Accelerators","authors":"Paria Darbani;Hakem Beitollahi;Pejman Lotfi-Kamran","doi":"10.1109/TETC.2023.3290138","DOIUrl":"10.1109/TETC.2023.3290138","url":null,"abstract":"Convolutional Neural Network (CNN) is used in many real-world applications due to its high accuracy. The rapid growth of modern applications based on learning algorithms has increased the importance of efficient implementation of CNNs. The array-type architecture is a well-known platform for the efficient implementation of CNN models, which takes advantage of parallel computation and data reuse. However, accelerators suffer from restricted hardware resources, whereas CNNs involve considerable communication and computation load. Furthermore, since accelerators execute CNN layer by layer, different shapes and sizes of layers lead to suboptimal resource utilization. This problem prevents the accelerator from reaching maximum performance. The increasing scale and complexity of deep learning applications exacerbate this problem. Therefore, the performance of CNN models depends on the hardware's ability to adapt to different shapes of different layers to increase resource utilization. This work proposes a reconfigurable accelerator that can efficiently execute a wide range of CNNs. The proposed flexible and low-cost reconfigurable interconnect units allow the array to perform CNN faster than fixed-size implementations (by 45.9% for ResNet-18 compared to the baseline). The proposed architecture also reduces the on-chip memory access rate by 36.5% without compromising accuracy.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"11 4","pages":"895-906"},"PeriodicalIF":5.9,"publicationDate":"2023-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62528290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CANNON: Communication-Aware Sparse Neural Network Optimization
IF 5.9 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-06-30 | DOI: 10.1109/TETC.2023.3289778
A. Alper Goksoy;Guihong Li;Sumit K. Mandal;Umit Y. Ogras;Radu Marculescu
Sparse deep neural networks (DNNs) have the potential to deliver compelling performance and energy efficiency without significant accuracy loss. However, their benefits can quickly diminish if their training is oblivious to the target hardware. For example, fewer critical connections can have a significant overhead if they translate into long-distance communication on the target hardware. Therefore, hardware-aware sparse training is needed to leverage the full potential of sparse DNNs. To this end, we propose a novel and comprehensive communication-aware sparse DNN optimization framework for tile-based in-memory computing (IMC) architectures. The proposed technique, CANNON, first maps the DNN layers onto the tiles of the target architecture. Then, it replaces the fully connected and convolutional layers with communication-aware sparse connections. After that, CANNON optimizes the communication cost with minimal impact on the DNN accuracy. Extensive experimental evaluations with a wide range of DNNs and datasets show up to 3.0× lower communication energy, 3.1× lower communication latency, and 6.8× lower energy-delay product compared to state-of-the-art pruning approaches, with a negligible impact on the classification accuracy on IMC-based machine learning accelerators.
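The abstract does not detail how communication cost enters the sparsification, so the sketch below shows one plausible ingredient only: scoring each weight of a toy fully connected layer by its magnitude divided by the hop distance between the tiles holding its input and output neurons, then keeping the highest-scoring fraction. The tile mapping, scoring rule, and keep ratio are assumptions, not CANNON's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy fully connected layer W[out, in]: 64 inputs -> 64 outputs, with inputs and
# outputs assumed to share a 4x4 grid of tiles (4 neurons per tile).
W = rng.normal(size=(64, 64))
GRID = 4
NEURONS_PER_TILE = 64 // (GRID * GRID)

def tile_coord(neuron):
    """(row, col) of the tile that holds a given neuron."""
    return divmod(neuron // NEURONS_PER_TILE, GRID)

def hops(i, o):
    """Manhattan distance between the tiles of input i and output o."""
    (r1, c1), (r2, c2) = tile_coord(i), tile_coord(o)
    return abs(r1 - r2) + abs(c1 - c2)

# Hop distance for every (output, input) pair, matching W's layout.
dist = np.array([[hops(i, o) for i in range(64)] for o in range(64)])

# Communication-aware score: large weights are worth keeping, long-distance
# connections are penalized (scoring rule assumed purely for illustration).
score = np.abs(W) / (1.0 + dist)

keep = 0.10                                  # keep the top 10% of connections
mask = score >= np.quantile(score, 1.0 - keep)
print("kept connections:", int(mask.sum()),
      "| average hops among kept:", round(float(dist[mask].mean()), 2))
```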
{"title":"CANNON: Communication-Aware Sparse Neural Network Optimization","authors":"A. Alper Goksoy;Guihong Li;Sumit K. Mandal;Umit Y. Ogras;Radu Marculescu","doi":"10.1109/TETC.2023.3289778","DOIUrl":"10.1109/TETC.2023.3289778","url":null,"abstract":"Sparse deep neural networks (DNNs) have the potential to deliver compelling performance and energy efficiency without significant accuracy loss. However, their benefits can quickly diminish if their training is oblivious to the target hardware. For example, fewer critical connections can have a significant overhead if they translate into long-distance communication on the target hardware. Therefore, hardware-aware sparse training is needed to leverage the full potential of sparse DNNs. To this end, we propose a novel and comprehensive communication-aware sparse DNN optimization framework for tile-based in-memory computing (IMC) architectures. The proposed technique, CANNON first maps the DNN layers onto the tiles of the target architecture. Then, it replaces the fully connected and convolutional layers with communication-aware sparse connections. After that, CANNON optimizes the communication cost with minimal impact on the DNN accuracy. Extensive experimental evaluations with a wide range of DNNs and datasets show up to 3.0× lower communication energy, 3.1× lower communication latency, and 6.8× lower energy-delay product compared to state-of-the-art pruning approaches with a negligible impact on the classification accuracy on IMC-based machine learning accelerators.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"11 4","pages":"882-894"},"PeriodicalIF":5.9,"publicationDate":"2023-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62528280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Attentive Interest Collaborative Filtering for Recommender Systems
IF 5.9 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-06-20 | DOI: 10.1109/TETC.2023.3286404
Libing Wu;Youhua Xia;Shuwen Min;Zhenchang Xia
Collaborative filtering (CF) is a pivotal building block in commercial recommender systems due to its strength and utility in user interest modeling. Recently, many researchers have turned to deep learning as a way to capture richer collaborative signals from user-item feature interactions. However, most deep-based methods only consider nonlinear, high-order interactions while ignoring the explicit collaborative signals in low-order interactions. They also typically ignore the quality of the user and item profiles. These are cornerstones in item recommendation that, we argue, must be considered for high-quality recommendations. Hence, we propose Deep Attentive Interest Collaborative Filtering (DAICF) to overcome these limitations. DAICF profiles users based on their interactive items, i.e., user neighborhood information. Similarly, item profiles are based on the users who have interacted with them, i.e., item neighborhood information. Given that a user's profile varies across different items, DAICF accurately models the user's attentive interests based on the specific target item. Low-order collaborative signals are captured by a shallow component, and high-order collaborative signals are captured by a deep component. These two complementary collaborative signals are then fused to provide rich recommendations that cut through today's information overload. By designing a personalized feature extraction method based on bilateral neighborhood information to solve the data sparsity problem in recommender systems, DAICF can dynamically distinguish the importance of a user's historical interaction items for predicting user preferences for a specific target item. A set of experiments against four real-world datasets validates that DAICF outperforms the most recent state-of-the-art recommendation algorithms and justifies the effectiveness and interpretability of our method.
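As a minimal illustration of target-conditioned attention over a user's historical items (the core idea the abstract attributes to DAICF), the sketch below computes softmax attention weights between a target item's embedding and the embeddings of the user's history, then forms a target-aware profile and a dot-product score. The embeddings are random stand-ins and the scoring head is an assumption, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 8                                     # embedding size (assumed)

item_emb = rng.normal(size=(100, D))      # toy catalogue of 100 item embeddings
history = [3, 17, 42, 7]                  # items this user interacted with (toy)
target = 55                               # target item to score (toy)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

H = item_emb[history]                     # (len(history), D)
t = item_emb[target]                      # (D,)

# Attention weights: relevance of each historical item to this specific target.
attn = softmax(H @ t / np.sqrt(D))

# Target-aware user profile and a simple dot-product preference score.
user_profile = attn @ H
score = float(user_profile @ t)
print("attention over history:", np.round(attn, 3), "| score:", round(score, 3))
```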
协作过滤(CF)因其在用户兴趣建模方面的优势和实用性,在商业推荐系统中是一个举足轻重的组成部分。最近,许多研究人员转向了深度学习,以此从用户-项目特征交互中捕捉更丰富的协作信号。然而,大多数基于深度学习的方法只考虑了非线性的高阶交互,而忽略了低阶交互中明确的协作信号。这些方法通常还忽略了用户和物品特征的质量。我们认为,这些都是商品推荐的基石,必须加以考虑才能获得高质量的推荐。因此,我们提出了深度兴趣协同过滤(DAICF)来克服这些局限性。DAICF 根据用户的互动项目(即用户邻域信息)对用户进行剖析。同样,项目档案也是基于与之互动的用户,即项目邻域信息。鉴于用户在不同项目上的资料各不相同,DAICF 可根据特定的目标项目对用户的关注兴趣进行精确建模。低阶协作信号由浅层分量捕捉,高阶协作信号由深层分量捕捉。然后将这两种互补的协作信号融合在一起,为用户提供丰富的推荐,以应对当今信息过载的问题。通过设计一种基于双边邻域信息的个性化特征提取方法来解决推荐系统中的数据稀疏性问题,DAICF 可以动态区分用户历史交互项目的重要性,从而预测用户对特定目标项目的偏好。通过对四个真实世界数据集的实验,验证了 DAICF 优于最新的先进推荐算法,并证明了我们方法的有效性和可解释性。
{"title":"Deep Attentive Interest Collaborative Filtering for Recommender Systems","authors":"Libing Wu;Youhua Xia;Shuwen Min;Zhenchang Xia","doi":"10.1109/TETC.2023.3286404","DOIUrl":"10.1109/TETC.2023.3286404","url":null,"abstract":"Collaborative filtering (CF) is a pivotal building block in commercial recommender systems due to its strength and utility in user interest modeling. Recently, many researchers have turned to deep learning as a way to capture richer collaborative signals from user-item feature interactions. However, most deep-based methods only consider nonlinear, high-order interactions while ignoring the explicit collaborative signals in low-order interactions. They also typically ignore the quality of the user and item profiles. These are cornerstones in item recommendation that, we argue, must be considered for high-quality recommendations. Hence, we propose Deep Attentive Interest Collaborative Filtering (DAICF) to overcome these limitations. DAICF profiles users based on their interactive items, i.e., user neighborhood information. Similarly, item profiles are based on users who had interacted with it, i.e., item neighborhood information. Given a user's profile varies over different items, DAICF accurately models his attentive interests based on the specific target item. Low-order collaborative signals are captured by a shallow component, and high-order collaborative signals are captured by a deep component. These two complementary collaborative signals are then fused to provide rich recommendations that cut through today's information overload. By designing a personalized feature extraction method based on bilateral neighborhood information to solve the data sparsity problem in recommender systems, DAICF can dynamically distinguish the importance of a user's historical interaction items for predicting user preferences for a specific target item. A set of experiments against four real-world datasets validate that DAICF outperforms the most recent state-of-the-art recommendation algorithms and justifies the effectiveness and interpretability of our method.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"12 2","pages":"467-481"},"PeriodicalIF":5.9,"publicationDate":"2023-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62528714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Near-Sensor Processing Accelerator for Approximate Local Binary Pattern Networks
IF 5.9 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-06-16 | DOI: 10.1109/TETC.2023.3285493
Shaahin Angizi;Mehrdad Morsali;Sepehr Tabrizchi;Arman Roohi
In this work, a high-speed and energy-efficient comparator-based Near-Sensor Local Binary Pattern accelerator architecture (NS-LBP) is proposed to execute a novel local binary pattern deep neural network. First, inspired by recent LBP networks, we design an approximate, hardware-oriented, and multiply-accumulate (MAC)-free network named Ap-LBP for efficient feature extraction, further reducing the computation complexity. Then, we develop NS-LBP as a processing-in-SRAM unit and a parallel in-memory LBP algorithm to process images near the sensor in a cache, remarkably reducing the power consumption of data transmission to an off-chip processor. Our circuit-to-application co-simulation results on MNIST and SVHN datasets demonstrate minor accuracy degradation compared to baseline CNN and LBP-network models, while NS-LBP achieves 1.25 GHz and an energy-efficiency of 37.4 TOPS/W. NS-LBP reduces energy consumption by 2.2× and execution time by a factor of 4× compared to the best recent LBP-based networks.
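For readers unfamiliar with the local binary pattern operator that NS-LBP accelerates, the following is a plain reference implementation of the classic 3x3 LBP on a toy image; the approximate, MAC-free Ap-LBP variant and the in-SRAM processing are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(6, 6))    # toy 8-bit grayscale image

# Offsets of the 8 neighbours; each contributes one bit of the LBP code.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp(image):
    """Classic 3x3 local binary pattern: threshold each neighbour at the centre pixel."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = image[y, x]
            code = 0
            for bit, (dy, dx) in enumerate(OFFSETS):
                code |= int(image[y + dy, x + dx] >= center) << bit
            out[y - 1, x - 1] = code
    return out

print(lbp(img))
```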
{"title":"A Near-Sensor Processing Accelerator for Approximate Local Binary Pattern Networks","authors":"Shaahin Angizi;Mehrdad Morsali;Sepehr Tabrizchi;Arman Roohi","doi":"10.1109/TETC.2023.3285493","DOIUrl":"https://doi.org/10.1109/TETC.2023.3285493","url":null,"abstract":"In this work, a high-speed and energy-efficient comparator-based \u0000<underline>N</u>\u0000ear-\u0000<underline>S</u>\u0000ensor \u0000<underline>L</u>\u0000ocal \u0000<underline>B</u>\u0000inary \u0000<underline>P</u>\u0000attern accelerator architecture (NS-LBP) is proposed to execute a novel local binary pattern deep neural network. First, inspired by recent LBP networks, we design an approximate, hardware-oriented, and multiply-accumulate (MAC)-free network named Ap-LBP for efficient feature extraction, further reducing the computation complexity. Then, we develop NS-LBP as a processing-in-SRAM unit and a parallel in-memory LBP algorithm to process images near the sensor in a cache, remarkably reducing the power consumption of data transmission to an off-chip processor. Our circuit-to-application co-simulation results on MNIST and SVHN datasets demonstrate minor accuracy degradation compared to baseline CNN and LBP-network models, while NS-LBP achieves 1.25 GHz and an energy-efficiency of 37.4 TOPS/W. NS-LBP reduces energy consumption by 2.2× and execution time by a factor of 4× compared to the best recent LBP-based networks.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"12 1","pages":"73-83"},"PeriodicalIF":5.9,"publicationDate":"2023-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140161150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Graph Embedding Techniques for Predicting Missing Links in Biological Networks: An Empirical Evaluation
IF 5.9 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-06-08 | DOI: 10.1109/TETC.2023.3282539
Binon Teji;Swarup Roy;Devendra Singh Dhami;Dinabandhu Bhandari;Pietro Hiram Guzzi
Network science tries to understand the complex relationships among entities or actors of a system through graph formalism. For instance, biological networks represent macromolecules such as genes, proteins, or other small chemicals as nodes and the interactions among the molecules as links or edges. Often, potential links are guessed computationally due to the expensive nature of wet lab experiments. Conventional link prediction techniques rely on local network topology and fail to incorporate the global structure fully. Graph representation learning (or embedding) aims to describe the properties of the entire graph by optimized, structure-preserving encoding of nodes or entire (sub)graphs into lower-dimensional vectors. Leveraging the encoded vectors as features improves the performance of the missing link identification task. Assessing the predictive quality of graph embedding techniques in missing link identification is essential. In this work, we evaluate the performance of ten (10) state-of-the-art graph embedding techniques in predicting missing links, with special emphasis on homogeneous and heterogeneous biological networks. Most available graph embedding techniques cannot be used directly for link prediction. Hence, we use the latent representation of the network produced by the candidate techniques and reconstruct the network using various similarity and kernel functions. We evaluate nine (09) similarity functions in combination with the candidate embedding techniques. We compare the embedding techniques’ performance against five (05) traditional (non-embedding-based) link prediction techniques. Experimental results reveal that the quality of embedding-based link prediction is better than that of its traditional counterparts. Among them, neural-network-based embedding and attention-based techniques show consistent performance. We even observe that dot-product-based similarity is best at inferring pair-wise edges among the nodes from their embeddings. We also report the interesting finding that, while predicting links in the heterogeneous graph, the models predict a good number of valid links between corresponding homogeneous nodes, possibly due to the indirect effect of homogeneous-heterogeneous interactions.
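As a minimal sketch of the dot-product reconstruction step the evaluation found most effective, the code below scores held-out edges and sampled non-edges by the dot product of node embeddings and estimates a pairwise AUC; the embeddings here are random stand-ins rather than the output of any of the ten evaluated methods.

```python
import numpy as np

rng = np.random.default_rng(4)
N, D = 50, 16

# Stand-in node embeddings; in the study these come from a trained embedding
# method, here they are random vectors purely to show the scoring mechanics.
Z = rng.normal(size=(N, D))

def dot_score(u, v):
    """Dot-product similarity used to judge whether edge (u, v) is likely."""
    return float(Z[u] @ Z[v])

# Held-out "true" edges and sampled non-edges (toy data).
pos = [(1, 2), (3, 4), (5, 6), (7, 8)]
neg = [tuple(rng.choice(N, size=2, replace=False)) for _ in range(4)]

pos_scores = [dot_score(u, v) for u, v in pos]
neg_scores = [dot_score(u, v) for u, v in neg]

# AUC estimate: how often a true edge outscores a non-edge.
auc = float(np.mean([p > n for p in pos_scores for n in neg_scores]))
print("pairwise AUC estimate:", auc)
```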
{"title":"Graph Embedding Techniques for Predicting Missing Links in Biological Networks: An Empirical Evaluation","authors":"Binon Teji;Swarup Roy;Devendra Singh Dhami;Dinabandhu Bhandari;Pietro Hiram Guzzi","doi":"10.1109/TETC.2023.3282539","DOIUrl":"10.1109/TETC.2023.3282539","url":null,"abstract":"Network science tries to understand the complex relationships among entities or actors of a system through graph formalism. For instance, biological networks represent macromolecules such as genes, proteins, or other small chemicals as nodes and the interactions among the molecules as links or edges. Often potential links are guessed computationally due to the expensive nature of wet lab experiments. Conventional link prediction techniques rely on local network topology and fail to incorporate the global structure fully. Graph representation learning (or embedding) aims to describe the properties of the entire graph by optimized, structure-preserving encoding of nodes or entire (sub) graphs into lower-dimensional vectors. Leveraging the encoded vectors as a feature improves the performance of the missing link identification task. Assessing the predictive quality of graph embedding techniques in missing link identification is essential. In this work, we evaluate the performance of ten (10) state-of-the-art graph embedding techniques in predicting missing links with special emphasis on homogeneous and heterogeneous biological networks. Most available graph embedding techniques cannot be used directly for link prediction. Hence, we use the latent representation of the network produced by the candidate techniques and reconstruct the network using various similarity and kernel functions. We evaluate nine (09) similarity functions in combination with candidate embedding techniques. We compare embedding techniques’ performance against five (05) traditional (non-embedding-based) link prediction techniques. Experimental results reveal that the quality of embedding-based link prediction is better than its counterpart. Among them, Neural Network-based embedding and attention-based techniques show consistent performance. We even observe that dot-product-based similarity is the best in inferring pair-wise edges among the nodes from their embedding. We report interesting findings that while predicting links in the heterogeneous graph, it predicts a good number of valid links between corresponding homogeneous nodes due to the possible indirect effect of homogeneous-heterogeneous interactions.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"12 1","pages":"190-201"},"PeriodicalIF":5.9,"publicationDate":"2023-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62528657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IEEE Transactions on Emerging Topics in Computing Information for Authors
IF 5.9 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-06-06 | DOI: 10.1109/TETC.2023.3279759
{"title":"IEEE Transactions on Emerging Topics in Computing Information for Authors","authors":"","doi":"10.1109/TETC.2023.3279759","DOIUrl":"https://doi.org/10.1109/TETC.2023.3279759","url":null,"abstract":"","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"11 2","pages":"C2-C2"},"PeriodicalIF":5.9,"publicationDate":"2023-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/6245516/10144811/10144701.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67883147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Guest Editorial: IEEE Transactions on Emerging Topics in Computing Thematic Section on Memory-Centric Designs: Processing-in-Memory, In-Memory Computing, and Near-Memory Computing for Real-World Applications
IF 5.9 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-06-06 | DOI: 10.1109/TETC.2023.3267909
Yuan-Hao Chang;Vincenzo Piuri
The von Neumann architecture has been the status quo since the dawn of modern computing. Computers built on the von Neumann architecture are composed of an intelligent master processor (e.g., CPU) and dumb memory/storage devices incapable of computation (e.g., memory and disk). However, the skyrocketing data volume in modern computing is calling this status quo into question. The excessive amounts of data movement between processor and memory/storage in more and more real-world applications (e.g., machine learning and AI applications) have made the processor-centric design a severe power and performance bottleneck. The slowing of Moore's Law also raises the need for a memory-centric design, which builds on recent advances in materials and manufacturing to open a paradigm shift. By doing computation right inside or near the memory, the memory-centric design promises massive throughput and energy savings.
{"title":"Guest Editorial: IEEE Transactions on Emerging Topics in Computing Thematic Section on Memory- Centric Designs: Processing-in-Memory, In-Memory Computing, and Near-Memory Computing for Real-World Applications","authors":"Yuan-Hao Chang;Vincenzo Piuri","doi":"10.1109/TETC.2023.3267909","DOIUrl":"https://doi.org/10.1109/TETC.2023.3267909","url":null,"abstract":"The von Neumann architecture has been the status quo since the dawn of modern computing. Computers built on the von Neumann architecture are composed of an intelligent master processor (e.g., CPU) and dumb memory/storage devices incapable of computation (e.g., memory and disk). However, the skyrocketing data volume in modern computing is calling such status quo into question. The excessive amounts of data movement between processor and memory/storage in more and more real-world applications (e.g., machine learning and AI applications) have made the processor-centric design a severe power and performance bottleneck. The diminishing Moore's Law also raises the need for a memory-centric design, which is rising on top of the recent material advancement and manufacturing innovation to open a paradigm shift. By doing computation right inside or near the memory, the memory-centric design promises massive throughput and energy savings.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"11 2","pages":"278-280"},"PeriodicalIF":5.9,"publicationDate":"2023-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/6245516/10144811/10144915.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67883146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0