
Latest Articles from IEEE Transactions on Mobile Computing

Charger Placement With Wave Interference
IF 7.7, CAS Tier 2 (Computer Science), Q1 (Computer Science, Information Systems). Pub Date: 2024-09-18. DOI: 10.1109/TMC.2024.3460403
Jing Xue;Die Wu;Jian Peng;Wenzheng Xu;Tang Liu
To guarantee the reliability of wireless rechargeable sensor networks (WRSNs), placing sufficient static chargers effectively ensures charging coverage for the entire network. However, this approach leaves a considerable number of sensors located within charging overlaps. The destructive wave interference caused by concurrent charging in these overlaps may weaken the sensors' received power, thereby degrading charging performance. This work addresses the CHArging utIlity maximizatioN (CHAIN) problem, which aims to maximize the overall charging utility while accounting for wave interference among multiple chargers. Specifically, given a set of stationary sensors, we investigate how to determine optimal positions for a fixed number of chargers. To tackle this problem, we first develop a charging model with wave interference, then propose a two-step charger placement scheme to identify the optimal charger positions. In the first step, we maximize the overall additive power of the waves involved in interference by selecting an appropriate initial position for each charger. In the second step, we maximize the overall charging utility by finding the optimal final position for each charger around its initial position. Finally, to evaluate the performance of our scheme, we conduct extensive simulations and field experiments; the results suggest that CHAIN outperforms existing algorithms.
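The interference effect at the heart of CHAIN can be illustrated with a toy superposition model (a minimal sketch assuming far-field 1/d amplitude decay, not the paper's actual charging model): each charger contributes a complex field whose phase grows with path length, and concurrent charging sums these fields, so the received power can be amplified or nearly cancelled.

```python
import cmath
import math

def received_power(sensor, chargers, wavelength=0.328, alpha=1.0):
    """Coherent superposition of charger signals at one sensor.

    Each charger contributes a field with amplitude alpha/d and phase
    2*pi*d/wavelength, where d is the charger-sensor distance; the
    received power is the squared magnitude of the summed fields, so
    overlapping chargers can interfere constructively or destructively.
    """
    field = 0j
    for charger in chargers:
        d = math.dist(sensor, charger)
        field += (alpha / d) * cmath.exp(2j * math.pi * d / wavelength)
    return abs(field) ** 2
```

With two chargers at equal distances the fields add in phase and the power quadruples relative to a single charger, while shifting one charger by half a wavelength flips its phase and the contributions largely cancel — which is why charger positioning, not just charger count, drives charging utility.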
{"title":"Charger Placement With Wave Interference","authors":"Jing Xue;Die Wu;Jian Peng;Wenzheng Xu;Tang Liu","doi":"10.1109/TMC.2024.3460403","DOIUrl":"10.1109/TMC.2024.3460403","url":null,"abstract":"To guarantee the reliability for WRSNs, placing sufficient static chargers effectively ensures charging coverage for the entire network. However, this approach leads to a considerable number of sensors located within charging overlaps. The destructive wave interference caused by concurrent charging in these overlaps may weaken sensors received power, thereby negatively impacting charging performance. This work addresses a CHArging utIlity maximizatioN (CHAIN) problem, which aims to maximize the overall charging utility while considering wave interference among multiple chargers. Specifically, given a set of stationary sensors, we investigate how to determine optimal positions for a fixed number of chargers. To tackle this problem, we first develop a charging model with wave interference, then propose a two-step charger placement scheme to identify the optimal charger positions. In the first step, we maximize the overall additive power of the waves involved in interference by selecting an appropriate initial position for each charger. Then, in the second step, we maximize the overall charging utility by finding the optimal final position for each charger around its initial position. 
Finally, to evaluate the performance of our scheme, we conduct extensive simulations and field experiments and the results suggest that CHAIN performs better than the existing algorithms.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 1","pages":"261-275"},"PeriodicalIF":7.7,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142257207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Argus: Enabling Cross-Camera Collaboration for Video Analytics on Distributed Smart Cameras
IF 7.7, CAS Tier 2 (Computer Science), Q1 (Computer Science, Information Systems). Pub Date: 2024-09-18. DOI: 10.1109/TMC.2024.3459409
Juheon Yi;Utku Günay Acer;Fahim Kawsar;Chulhong Min
Overlapping cameras offer exciting opportunities to view a scene from different angles, allowing for more advanced, comprehensive and robust analysis. However, existing video analytics systems for multi-camera streams are mostly limited to (i) per-camera processing and aggregation and (ii) workload-agnostic centralized processing architectures. In this paper, we present Argus, a distributed video analytics system with cross-camera collaboration on smart cameras. We identify multi-camera, multi-target tracking as the primary task of multi-camera video analytics and develop a novel technique that avoids redundant, processing-heavy identification tasks by leveraging object-wise spatio-temporal association in the overlapping fields of view across multiple cameras. We further develop a set of techniques to perform these operations across distributed cameras without cloud support at low latency by (i) dynamically ordering the camera and object inspection sequence and (ii) flexibly distributing the workload across smart cameras, taking into account network transmission and heterogeneous computational capacities. Evaluation of three real-world overlapping camera datasets with two Nvidia Jetson devices shows that Argus reduces the number of object identifications and end-to-end latency by up to 7.13× and 2.19× (4.86× and 1.60× compared to the state-of-the-art), while achieving comparable tracking quality.
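The key saving — reusing an identity established by an overlapping camera instead of re-running identification — can be sketched as nearest-neighbor association on a shared ground plane. Here `project` is a hypothetical calibration mapping from camera A's coordinates into camera B's; this illustrates the idea, not Argus's actual pipeline:

```python
import math

def associate(cam_a_tracks, cam_b_dets, project, gate=1.5):
    """Match camera-B detections to already-identified camera-A tracks.

    Tracks are projected into B's coordinate frame; a detection within
    ``gate`` of a projected track reuses that track's identity, so the
    expensive identification model only runs on the unmatched leftovers.
    """
    matches, unmatched = {}, []
    for det_id, (x, y) in cam_b_dets.items():
        best, best_d = None, gate
        for track_id, pos in cam_a_tracks.items():
            px, py = project(pos)
            d = math.hypot(px - x, py - y)
            if d < best_d:
                best, best_d = track_id, d
        if best is not None:
            matches[det_id] = best      # identity reused across cameras
        else:
            unmatched.append(det_id)    # needs full identification
    return matches, unmatched
```

Only the `unmatched` list would be forwarded to the heavy identification model, which is where the reduction in object identifications comes from.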
{"title":"Argus: Enabling Cross-Camera Collaboration for Video Analytics on Distributed Smart Cameras","authors":"Juheon Yi;Utku Günay Acer;Fahim Kawsar;Chulhong Min","doi":"10.1109/TMC.2024.3459409","DOIUrl":"10.1109/TMC.2024.3459409","url":null,"abstract":"Overlapping cameras offer exciting opportunities to view a scene from different angles, allowing for more advanced, comprehensive and robust analysis. However, existing video analytics systems for multi-camera streams are mostly limited to (i) per-camera processing and aggregation and (ii) workload-agnostic centralized processing architectures. In this paper, we present Argus, a distributed video analytics system with \u0000<italic>cross-camera collaboration</i>\u0000 on smart cameras. We identify multi-camera, multi-target tracking as the primary task of multi-camera video analytics and develop a novel technique that avoids redundant, processing-heavy identification tasks by leveraging object-wise spatio-temporal association in the overlapping fields of view across multiple cameras. We further develop a set of techniques to perform these operations across distributed cameras without cloud support at low latency by (i) dynamically ordering the camera and object inspection sequence and (ii) flexibly distributing the workload across smart cameras, taking into account network transmission and heterogeneous computational capacities. 
Evaluation of three real-world overlapping camera datasets with two Nvidia Jetson devices shows that Argus reduces the number of object identifications and end-to-end latency by up to 7.13× and 2.19× (4.86× and 1.60× compared to the state-of-the-art), while achieving comparable tracking quality.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 1","pages":"117-134"},"PeriodicalIF":7.7,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142269216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
t-READi: Transformer-Powered Robust and Efficient Multimodal Inference for Autonomous Driving
IF 7.7, CAS Tier 2 (Computer Science), Q1 (Computer Science, Information Systems). Pub Date: 2024-09-18. DOI: 10.1109/TMC.2024.3462437
Pengfei Hu;Yuhang Qian;Tianyue Zheng;Ang Li;Zhe Chen;Yue Gao;Xiuzhen Cheng;Jun Luo
Given the wide adoption of multimodal sensors (e.g., camera, lidar, radar) by autonomous vehicles (AVs), deep analytics that fuse their outputs for robust perception become imperative. However, existing fusion methods often make two assumptions that rarely hold in practice: i) similar data distributions for all inputs and ii) constant availability of all sensors. Because, for example, lidars vary in resolution and radars may fail, such variability often results in significant performance degradation in fusion. To this end, we present t-READi, an adaptive inference system that accommodates the variability of multimodal sensory data and thus enables robust and efficient perception. t-READi identifies variation-sensitive yet structure-specific model parameters; it then adapts only these parameters while keeping the rest intact. t-READi also leverages a cross-modality contrastive learning method to compensate for the loss from missing modalities. Both functions are implemented to maintain compatibility with existing multimodal deep fusion methods. Extensive experiments demonstrate that, compared with status quo approaches under realistic data and modal variations, t-READi not only improves the average inference accuracy by more than 6% but also reduces the inference latency by almost 15×, at the cost of only 5% extra memory overhead in the worst case.
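The selective-adaptation idea — tune only variation-sensitive parameters and freeze the rest — can be sketched with scalar parameters, using gradient magnitude under the shifted input distribution as an assumed sensitivity proxy (hypothetical names; not t-READi's actual selection rule):

```python
def select_sensitive(grads, fraction=0.25):
    """Pick the parameter names with the largest gradient magnitude
    under the new sensor distribution (assumed sensitivity proxy)."""
    k = max(1, int(len(grads) * fraction))
    ranked = sorted(grads, key=lambda name: abs(grads[name]), reverse=True)
    return set(ranked[:k])

def adapt(params, grads, sensitive, lr=0.1):
    """Gradient-descent update on the sensitive parameters only;
    every other parameter is kept intact."""
    return {name: (w - lr * grads[name]) if name in sensitive else w
            for name, w in params.items()}
```

Updating only a small, targeted slice of the model is what keeps per-variation adaptation cheap enough for in-vehicle inference.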
{"title":"t-READi: Transformer-Powered Robust and Efficient Multimodal Inference for Autonomous Driving","authors":"Pengfei Hu;Yuhang Qian;Tianyue Zheng;Ang Li;Zhe Chen;Yue Gao;Xiuzhen Cheng;Jun Luo","doi":"10.1109/TMC.2024.3462437","DOIUrl":"10.1109/TMC.2024.3462437","url":null,"abstract":"Given the wide adoption of multimodal sensors (e.g., camera, lidar, radar) by \u0000<italic>autonomous vehicle</i>\u0000s (AVs), deep analytics to fuse their outputs for a robust perception become imperative. However, existing fusion methods often make two assumptions rarely holding in practice: i) similar data distributions for all inputs and ii) constant availability for all sensors. Because, for example, lidars have various resolutions and failures of radars may occur, such variability often results in significant performance degradation in fusion. To this end, we present t-READi, an adaptive inference system that accommodates the variability of multimodal sensory data and thus enables robust and efficient perception. t-READi identifies variation-sensitive yet \u0000<italic>structure-specific</i>\u0000 model parameters; it then adapts only these parameters while keeping the rest intact. t-READi also leverages a cross-modality contrastive learning method to compensate for the loss from missing modalities. Both functions are implemented to maintain compatibility with existing multimodal deep fusion methods. 
The extensive experiments evidently demonstrate that compared with the status quo approaches, t-READi not only improves the average inference accuracy by more than 6% but also reduces the inference latency by almost 15× with the cost of only 5% extra memory overhead in the worst case under realistic data and modal variations.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 1","pages":"135-149"},"PeriodicalIF":7.7,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142257208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Exploitation and Confrontation: Sustainability Analysis of Crowdsourcing
IF 7.7, CAS Tier 2 (Computer Science), Q1 (Computer Science, Information Systems). Pub Date: 2024-09-18. DOI: 10.1109/TMC.2024.3463417
Hang Zhao;Shengling Wang;Hongwei Shi;Jianhui Huang;Yu Guo;Xiuzhen Cheng
Game theory is an effective analytical tool for crowdsourcing. Existing studies based on it share a commonality: the influence of players' decisions is bilateral. However, this symmetry is broken by the zero-determinant (ZD) strategy, with which the ZD player can unilaterally control the opponent's expected payoff; crowdsourcing games accordingly yield conclusions that differ from traditional ones. By addressing three questions, this paper is the first work to analyze the turbulence in crowdsourcing caused by the inequality between the requestor and the worker in the ZD game. The first question reveals the potential for the requestor to exploit the worker; the second quantifies the worker's tolerance of exploitation, providing a basis for confrontation; the third serves as the cornerstone for sustaining crowdsourcing by regulating the requestor's exploitative behavior. To answer these questions, we extend ZD strategies from binary games to continuous ones, not only revealing the requestor's dominance but also enriching the theoretical system of ZD strategies and broadening their application. Furthermore, we introduce the worker's dissatisfaction degree, identify its exponential trend and decay rate, and reveal the optimal timing and speed for the worker's effective confrontation and for the requestor's maximum exploitation. Numerical simulations have validated the effectiveness of our analyses.
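The unilateral-control property can be checked numerically in the classical binary (iterated prisoner's dilemma) setting that this paper extends to continuous games. With standard payoffs (T, R, P, S) = (5, 3, 1, 0), Press and Dyson's extortionate ZD strategy p = (11/13, 1/2, 7/26, 0) enforces s_X − P = 3(s_Y − P) for every opponent strategy q; the following is a sketch of that known result, not the paper's extension:

```python
def stationary_payoffs(p, q, iters=20000):
    """Long-run payoffs of memory-one players X (strategy p) and Y
    (strategy q) in the iterated prisoner's dilemma.

    States from X's view: 0=CC, 1=CD, 2=DC, 3=DD; p[s] and q[s] are
    cooperation probabilities after state s (Y sees roles swapped).
    """
    swap = [0, 2, 1, 3]                       # same outcome from Y's view
    trans = []
    for s in range(4):
        cx, cy = p[s], q[swap[s]]
        trans.append([cx * cy, cx * (1 - cy),
                      (1 - cx) * cy, (1 - cx) * (1 - cy)])
    v = [0.25] * 4
    for _ in range(iters):                    # power iteration to stationarity
        v = [sum(v[s] * trans[s][t] for s in range(4)) for t in range(4)]
    sx = sum(vi * pay for vi, pay in zip(v, [3, 0, 5, 1]))  # R, S, T, P
    sy = sum(vi * pay for vi, pay in zip(v, [3, 5, 0, 1]))
    return sx, sy
```

Whatever q the worker-side player adopts, the linear payoff relation holds — which is exactly the dominance the requestor can exploit and the worker must confront.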
{"title":"Exploitation and Confrontation: Sustainability Analysis of Crowdsourcing","authors":"Hang Zhao;Shengling Wang;Hongwei Shi;Jianhui Huang;Yu Guo;Xiuzhen Cheng","doi":"10.1109/TMC.2024.3463417","DOIUrl":"10.1109/TMC.2024.3463417","url":null,"abstract":"Game theory is an effective analytical tool for crowdsourcing. Existing studies based on it share a commonality: the influence of players’ decisions is \u0000<italic>bilateral</i>\u0000. However, the status is broken by the zero-determinant (ZD) strategy, where the ZD player can \u0000<italic>unilaterally</i>\u0000 control the opponent's expected payoff. Thereby, crowdsourcing games trigger conclusions that differ from traditional ones. By addressing three questions, this paper is the first work to analyze the turbulence in crowdsourcing caused by the inequality between the requestor and the worker in the ZD game. The first question reveals the potential for the requestor to exploit the worker; the second question quantifies the worker's tolerance towards exploitation, providing a basis for confrontation; the third question serves as the cornerstone for maintaining the crowdsourcing, regulating the requestor's exploitative behavior. To answer these questions, we extend ZD strategies from binary games to continuous ones, not only revealing the requestor's dominance but also enriching the theoretical system of ZD strategies and broadening their application. Furthermore, we introduce the worker's dissatisfaction degree, identifying the exponential trend and decay rate, revealing optimal timing and speed for the worker's effective confrontation and maximum exploitation for the requestor. 
Numerical simulations have validated the effectiveness of our analyses.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 2","pages":"614-626"},"PeriodicalIF":7.7,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142257209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Bison: A Binary Sparse Network Coding Based Contents Sharing Scheme for D2D-Enabled Mobile Edge Caching Network
IF 7.7, CAS Tier 2 (Computer Science), Q1 (Computer Science, Information Systems). Pub Date: 2024-09-18. DOI: 10.1109/TMC.2024.3463735
Cheng Peng;Jun Yin;Lei Wang;Fu Xiao
Mobile edge caching network (MEN), which enables popular or reusable content caching and sharing among adjacent mobile edge devices, has become a promising solution to reduce the traffic and burden over backhaul links. Network coding (NC), represented by classical random linear network coding (RLNC), is utilized to facilitate content delivery and increase throughput in MEN. However, as the harsh decoding conditions result in unacceptable time and storage overhead, classical RLNC schemes struggle to be widely deployed in practice. In this work, we propose a cost-effective NC-based content-sharing scheme based on binary sparse network coding (BSNC), called Bison, for D2D-enabled MEN. Based on the sharing relationships among binary sparse coded blocks (BSCBs), Bison first designs a caching maintenance module to characterize the sharing progress and maintain the caching state of each edge node. Then, Bison defines a matching metric named neighbor utility to evaluate neighbors' matching values by considering nodes' demand and content decodability. Guided by this metric, Bison achieves the most beneficial matching relationship among edge nodes through a proposed online matching policy. Finally, Bison devises a coded block delivery strategy to enable the sharing of valuable content between two matched edge nodes. Extensive experiments in simulations and real-world Android testbeds demonstrate its effectiveness and efficiency: Bison consumes at least 30% less time than the RLNC-based scheme and incurs at least 10% less storage overhead than the classical BSNC-based scheme. The results also show that our matching policy and coded block delivery strategy achieve low response latency on edge and mobile devices.
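Binary sparse network coding itself is straightforward to sketch: a coded block is the XOR of a random sparse subset of source blocks, and a receiver decodes once its coefficient vectors reach full rank, via Gaussian elimination over GF(2). This is a generic BSNC sketch, not Bison's modules:

```python
import random

def bsnc_encode(blocks, sparsity=0.3, rng=random):
    """Build one binary sparse coded block (BSCB): the XOR of a
    random sparse subset of the source blocks (0/1 coefficients)."""
    coeffs = []
    while not any(coeffs):
        coeffs = [1 if rng.random() < sparsity else 0 for _ in blocks]
    payload = bytearray(len(blocks[0]))
    for c, b in zip(coeffs, blocks):
        if c:
            payload = bytearray(x ^ y for x, y in zip(payload, b))
    return coeffs, bytes(payload)

def bsnc_decode(packets, n):
    """GF(2) Gaussian elimination; returns the n source blocks once
    the received coefficient vectors span the full space, else None."""
    rows = []
    for coeffs, payload in packets:
        c, p = list(coeffs), bytearray(payload)
        for rc, rp in rows:                      # forward elimination
            if c[rc.index(1)]:
                c = [a ^ b for a, b in zip(c, rc)]
                p = bytearray(a ^ b for a, b in zip(p, rp))
        if any(c):
            rows.append((c, p))
    if len(rows) < n:
        return None                              # not yet full rank
    rows.sort(key=lambda r: r[0].index(1))
    for i in reversed(range(n)):                 # back-substitution
        ci, pi = rows[i]
        for j in range(i):
            cj, pj = rows[j]
            if cj[ci.index(1)]:
                rows[j] = ([a ^ b for a, b in zip(cj, ci)],
                           bytearray(a ^ b for a, b in zip(pj, pi)))
    return [bytes(p) for _, p in rows]
```

Because each coefficient vector is sparse, elimination touches few columns per packet, which is what keeps decoding cheap relative to dense RLNC.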
{"title":"Bison: A Binary Sparse Network Coding Based Contents Sharing Scheme for D2D-Enabled Mobile Edge Caching Network","authors":"Cheng Peng;Jun Yin;Lei Wang;Fu Xiao","doi":"10.1109/TMC.2024.3463735","DOIUrl":"10.1109/TMC.2024.3463735","url":null,"abstract":"Mobile edge caching network (MEN), which enables popular or reusable content caching and sharing among adjacent mobile edge devices, has become a promising solution to reduce the traffic and burden over backhaul links. Network coding (NC), represented by classical random linear network coding (RLNC), is utilized to facilitate content delivery and increase throughput in MEN. However, as the harsh decoding condition results in unacceptable time and storage overhead, classical RLNC schemes struggle to be widely deployed in practice. In this work, we propose a cost-effective NC-based content-sharing scheme based on binary sparse network coding (BSNC), called Bison, for D2D-enabled MEN. Based on the shared relationship between the binary sparse coded block (BSCB), Bison first designs a caching maintenance module to characterize the sharing progress and maintain the caching state of each edge node. Then, Bison defines a matching metric named neighbor utility to evaluate neighbors’ matching values by considering nodes’ demand and content decodability. Guiding by the metric, Bison achieves the most beneficial matching relationship among edge nodes through a proposed online matching policy. Finally, Bison devises a coded block delivery strategy to enable the sharing of valuable content between two matched edge nodes. Extensive experiments in simulations and real-world Android testbeds demonstrate its effectiveness and efficiency, wherein Bison is at least 30% less than the RLNC-based scheme on time consumption and at least 10% less than the classical BSNC-based scheme on storage overhead. 
The results also show that our matching policy and coded block delivery strategy can perform with a low response latency on edge and mobile devices.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 2","pages":"677-695"},"PeriodicalIF":7.7,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142257219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Task-Oriented Video Compressive Streaming for Real-Time Semantic Segmentation
IF 7.7, CAS Tier 2 (Computer Science), Q1 (Computer Science, Information Systems). Pub Date: 2024-09-18. DOI: 10.1109/TMC.2024.3446185
Xuedou Xiao;Yingying Zuo;Mingxuan Yan;Wei Wang;Jianhua He;Qian Zhang
Real-time semantic segmentation (SS) is a major task for various vision-based applications such as self-driving. Due to limited computing resources and stringent performance requirements, streaming videos from camera-embedded mobile devices to edge servers for SS is a promising approach. While there are increasing efforts on task-oriented video compression, most SS-applicable algorithms apply relatively uniform compression, as the sensitive regions are less obvious and concentrated. Such processing results in low compression performance and significantly limits the capacity of edge servers supporting real-time SS. In this paper, we propose STAC, a novel task-oriented DNN-driven video compressive streaming algorithm tailored for SS, to strike an accuracy-bitrate balance and adapt to time-varying bandwidth. It exploits the DNN's gradients as sensitivity metrics for fine-grained spatial adaptive compression and includes a temporal adaptive scheme that integrates spatial adaptation with predictive coding. Furthermore, we design a new bandwidth-aware neural network, serving as a compatible configuration tuner to fit time-varying bandwidth and content. STAC is evaluated in a system with a commodity mobile device and an edge server with real-world network traces. Experiments show that STAC can save up to 63.7–75.2% of bandwidth or improve accuracy by 3.1–9.5% compared to state-of-the-art algorithms, while adapting to time-varying bandwidth.
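The gradients-as-sensitivity idea can be sketched as a per-block quantization map: average the magnitude of the SS model's input gradients over each block and give high-gradient (accuracy-critical) blocks finer quantization. Block size and the QP range here are illustrative placeholders, not STAC's parameters:

```python
def qp_map(grad, block=2, qp_max=40, qp_min=20):
    """Per-block quantization from input-gradient magnitude.

    ``grad`` is a 2-D grid of |dLoss/dPixel| values from the SS model;
    blocks with larger average gradient are more accuracy-sensitive and
    receive a lower (finer) quantization parameter.
    """
    h, w = len(grad), len(grad[0])
    means = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            vals = [grad[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            row.append(sum(vals) / len(vals))
        means.append(row)
    lo = min(min(r) for r in means)
    hi = max(max(r) for r in means)
    scale = (hi - lo) or 1.0            # guard a uniform sensitivity map
    return [[round(qp_max - (m - lo) / scale * (qp_max - qp_min))
             for m in r] for r in means]
```

The resulting map would drive a spatially adaptive encoder so bits concentrate where segmentation accuracy is actually at stake.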
{"title":"Task-Oriented Video Compressive Streaming for Real-Time Semantic Segmentation","authors":"Xuedou Xiao;Yingying Zuo;Mingxuan Yan;Wei Wang;Jianhua He;Qian Zhang","doi":"10.1109/TMC.2024.3446185","DOIUrl":"10.1109/TMC.2024.3446185","url":null,"abstract":"Real-time semantic segmentation (SS) is a major task for various vision-based applications such as self-driving. Due to the limited computing resources and stringent performance requirements, streaming videos from camera-embedded mobile devices to edge servers for SS is a promising approach. While there are increasing efforts on task-oriented video compression, most SS-applicable algorithms apply more uniform compression, as the sensitive regions are less obvious and concentrated. Such processing results in low compression performance and significantly limits the capacity of edge servers supporting real-time SS. In this paper, we propose STAC, a novel task-oriented DNN-driven video compressive streaming algorithm tailed for SS, to strike accuracy-bitrate balance and adapt to time-varying bandwidth. It exploits DNN's gradients as sensitivity metrics for fine-grained spatial adaptive compression and includes a temporal adaptive scheme that integrates spatial adaptation with predictive coding. Furthermore, we design a new bandwidth-aware neural network, serving as a compatible configuration tuner to fit time-varying bandwidth and content. STAC is evaluated in a system with a commodity mobile device and an edge server with real-world network traces. 
Experiments show that STAC can save up to 63.7–75.2% of bandwidth or improve accuracy by 3.1–9.5% compared to state-of-the-art algorithms, while capable of adapting to time-varying bandwidth.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"23 12","pages":"14396-14413"},"PeriodicalIF":7.7,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142269378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Collaborative Video Streaming With Super-Resolution in Multi-User MEC Networks
IF 7.7, CAS Tier 2 (Computer Science), Q1 (Computer Science, Information Systems). Pub Date: 2024-09-17. DOI: 10.1109/TMC.2024.3461685
Xiaobo Zhou;Jiaxin Zeng;Shuxin Ge;Xilai Liu;Tie Qiu
The ever-increasing quality-of-experience (QoE) demands of video streaming have prompted the integration of video super-resolution into multi-access edge computing (MEC) networks. With super-resolution, low-resolution frames can be reconstructed into high-resolution ones by the edge node and end device collaboratively, which is beneficial for improving QoE. However, existing works focus on designing video streaming strategies for single-user scenarios, which cannot be applied to multi-user scenarios because of resource contention among users and the huge solution space created by coupling bitrate selection with edge-end workload sharing. To fill this gap, we propose a collaborative video streaming strategy with super-resolution in multi-user MEC networks, named Co-Video, to maximize the average QoE through optimal bitrate selection and workload sharing. We first formulate the problem as maximizing the average QoE, where QoE incorporates playback delay, video quality, and smoothness. Then, we transform the optimization problem into a partially observable Markov decision process (POMDP) and solve it with the Co-Video strategy based on the multi-agent soft actor-critic (MASAC) algorithm. Specifically, Co-Video utilizes a branching actor network to converge stably to a good policy. Finally, trace-driven simulations on real-world bandwidth traces demonstrate that Co-Video outperforms the state-of-the-art baselines.
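The QoE objective described — reward video quality, penalize playback delay and quality switches (smoothness) — can be written as a simple per-session score. The weights `alpha` and `beta` are assumed placeholders, not the paper's coefficients:

```python
def qoe(qualities, delays, alpha=0.5, beta=1.0):
    """Session QoE: total video quality minus a playback-delay penalty
    and a smoothness penalty on chunk-to-chunk quality switches."""
    quality = sum(qualities)
    delay_penalty = alpha * sum(delays)
    smoothness_penalty = beta * sum(
        abs(b - a) for a, b in zip(qualities, qualities[1:]))
    return quality - delay_penalty - smoothness_penalty
```

A per-user score of this shape, averaged over users, is the kind of objective the POMDP formulation would maximize.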
{"title":"Collaborative Video Streaming With Super-Resolution in Multi-User MEC Networks","authors":"Xiaobo Zhou;Jiaxin Zeng;Shuxin Ge;Xilai Liu;Tie Qiu","doi":"10.1109/TMC.2024.3461685","DOIUrl":"10.1109/TMC.2024.3461685","url":null,"abstract":"The ever-increasing quality of experience (QoE) demand for video streaming has prompted the integration of video super-resolution and multi-access edge computing networks (MEC). With super-resolution, the low-resolution frames can be reconstructed into high-resolution ones by edge node and end device collaboratively, which is beneficial in improving QoE. However, the existing works focus on designing video streaming strategies in single-user scenarios, which cannot be applied to multi-user scenarios due to the resource contention among users, as well as the huge solution space of coupled bitrate selection and workload share between edge-end. To fill this gap, we propose a collaborative video streaming strategy with super-resolution in multi-user MEC networks, named Co-Video, to maximize the average QoE by making optimal bitrate selection and workload share. We first formulate the problem as an optimization problem towards maximum average QoE, where the QoE incorporates playback delay, video quality, and smoothness. Then, we transform the optimization problem into a partially observable Markov decision process (POMDP) and exploit the Co-Video strategy based on the multi-agent soft actor-critic (MASAC) algorithm. Specifically, Co-Video utilizes the branching actor network to converge to good policy stably. 
Finally, trace-driven simulations on real-world bandwidth traces demonstrate that Co-Video outperforms the state-of-the-art baselines.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 2","pages":"571-584"},"PeriodicalIF":7.7,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142257418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Achieving Linear Speedup in Asynchronous Federated Learning With Heterogeneous Clients
IF 7.7 CAS Rank 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-09-17 DOI: 10.1109/TMC.2024.3461852
Xiaolu Wang;Zijian Li;Shi Jin;Jun Zhang
Federated learning (FL) is an emerging distributed training paradigm that aims to learn a common global model without exchanging or transferring the data that are stored locally at different clients. The Federated Averaging (FedAvg)-based algorithms have gained substantial popularity in FL to reduce the communication overhead, where each client conducts multiple localized iterations before communicating with a central server. In this paper, we focus on FL where the clients have diverse computation and/or communication capabilities. Under this circumstance, FedAvg can be less efficient since it requires all clients that participate in the global aggregation in a round to initiate iterations from the latest global model, and thus the synchronization among fast clients and straggler clients can severely slow down the overall training process. To address this issue, we propose an efficient asynchronous federated learning (AFL) framework called Delayed Federated Averaging (DeFedAvg). In DeFedAvg, the clients are allowed to perform local training with different stale global models at their own paces. Theoretical analyses demonstrate that DeFedAvg achieves asymptotic convergence rates that are on par with the results of FedAvg for solving nonconvex problems. More importantly, DeFedAvg is the first AFL algorithm that provably achieves the desirable linear speedup property, which indicates its high scalability. Additionally, we carry out extensive numerical experiments using real datasets to validate the efficiency and scalability of our approach when training deep neural networks.
IEEE Transactions on Mobile Computing, vol. 24, no. 1, pp. 435–448. DOI: 10.1109/TMC.2024.3461852.
Citations: 0
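DeFedAvg's key idea — each client runs local training from a stale copy of the global model at its own pace, and the server averages the returned updates — can be illustrated with a toy simulation. The quadratic client losses, the one-round-per-client lag pattern, and the step sizes below are made-up assumptions, not the paper's setup:

```python
# Minimal sketch of delayed federated averaging in the spirit of DeFedAvg:
# each client runs local SGD from a *stale* snapshot of the global model and
# the server averages the returned deltas. Quadratic losses, lag pattern, and
# step sizes are toy assumptions, not the paper's setup.

def local_update(x_stale, target, lr=0.1, steps=5):
    """A few local gradient steps on f_i(x) = 0.5 * (x - target)**2."""
    x = x_stale
    for _ in range(steps):
        x -= lr * (x - target)       # gradient of the local quadratic
    return x - x_stale               # model delta sent back to the server

targets = [1.0, 3.0, 5.0]            # per-client optima; their mean is 3.0
history = [0.0]                      # snapshots of the global model per round
x_global = 0.0
for _ in range(50):
    deltas = []
    for i, target in enumerate(targets):
        stale = history[max(0, len(history) - 1 - i)]  # client i lags i rounds
        deltas.append(local_update(stale, target))
    x_global += sum(deltas) / len(deltas)              # FedAvg-style averaging
    history.append(x_global)

print(round(x_global, 3))  # → 3.0 (the mean of the client optima)
```

Even with each client reading a model that is several rounds old, the averaged deltas still drive the global model to the consensus optimum — the intuition behind tolerating stragglers without synchronization.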
Near-Field Beam Training for Extremely Large-Scale MIMO Based on Deep Learning
IF 7.7 CAS Rank 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-09-17 DOI: 10.1109/TMC.2024.3462960
Jiali Nie;Yuanhao Cui;Zhaohui Yang;Weijie Yuan;Xiaojun Jing
Extremely Large-scale Array (ELAA) is considered a frontier technology for future communication systems, playing a crucial role in enhancing the rate and spectral efficiency of wireless networks. As ELAA employs a multitude of antennas operating at higher frequencies, users are typically situated in the near-field region where the spherical wavefront propagates. Near-field beam training requires information on both angle and distance, which inevitably leads to a significant increase in the beam training overhead. To address this challenge, we propose a near-field beam training method based on deep learning. Specifically, we employ a convolutional neural network (CNN) to efficiently extract channel characteristics from historical data by strategically selecting padding and kernel sizes. The negative value of the user average achievable rate is utilized as the loss function to optimize the beamformer, maximizing the achievable rate in multi-user networks without relying on predefined beam codebooks. Once deployed, the model requires only pre-estimated channel state information (CSI) to compute the optimal beamforming vector. Simulation results demonstrate that the proposed scheme achieves more stable beamforming gains and substantially outperforms traditional beam training approaches. Furthermore, owing to the inherent traits of deep learning methodologies, this approach substantially diminishes the near-field beam training overhead.
IEEE Transactions on Mobile Computing, vol. 24, no. 1, pp. 352–362. DOI: 10.1109/TMC.2024.3462960.
Citations: 0
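The training objective described in the abstract — the negative of the users' average achievable rate, so that minimizing the loss maximizes the rate — can be sketched directly. The two-antenna channels, unit noise power, and fixed beam below are illustrative numbers; in the paper this loss would drive a CNN that outputs the beamforming vector:

```python
import math

# Sketch of a negative-average-achievable-rate loss. Channels, noise power,
# and the fixed beam are illustrative assumptions; the paper backpropagates
# such a loss through a CNN beamformer rather than evaluating a fixed vector.

def achievable_rate(h, w, noise=1.0):
    """Rate log2(1 + |h^H w|^2 / noise) for channel h and beam w."""
    gain = abs(sum(hc.conjugate() * wc for hc, wc in zip(h, w))) ** 2
    return math.log2(1.0 + gain / noise)

def loss(channels, w):
    """Negative average rate over all users — the quantity a trainer minimizes."""
    rates = [achievable_rate(h, w) for h in channels]
    return -sum(rates) / len(rates)

channels = [[1 + 0j, 0 + 1j],   # user 1's channel (hypothetical)
            [1 + 0j, 0 - 1j]]   # user 2's channel (hypothetical)
beam = [1 + 0j, 0j]             # unit-norm beam on the first antenna
print(round(loss(channels, beam), 3))  # → -1.0
```

Because the loss is just a differentiable function of the beamforming vector, no predefined beam codebook is needed — the network can output any vector and be scored directly, which is the point the abstract makes.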
Few-Shot Adaptation to Unseen Conditions for Wireless-Based Human Activity Recognition Without Fine-Tuning
IF 7.7 CAS Rank 2 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-09-17 DOI: 10.1109/TMC.2024.3462466
Xiaotong Zhang;Qingqiao Hu;Zhen Xiao;Tao Sun;Jiaxi Zhang;Jin Zhang;Zhenjiang Li
Wireless-based human activity recognition (WHAR) enables various promising applications. However, since WHAR is sensitive to changes in sensing conditions (e.g., different environments, users, and new activities), trained models often do not work well under new conditions. Recent research uses meta-learning to adapt models, but these approaches must fine-tune the model, which greatly hinders the widespread adoption of WHAR in practice because model fine-tuning is difficult to automate and requires deep-learning expertise. The fundamental reason existing works need model fine-tuning is that their goal is to find the mapping relationship between data samples and corresponding activity labels. Since this mapping reflects the intrinsic properties of data in the perceptual scene, it is naturally related to the conditions under which the activity is sensed. To address this problem, we exploit the principle that under the same sensing condition, data of the same activity class are more similar (in a certain latent space) than data of other classes, and this property holds invariant across different conditions. Our main observation is that meta-learning can actually also transform WHAR design into a learning problem that is always under similar conditions, thus decoupling the dependence on sensing conditions. With this capability, general and accurate WHAR can be achieved, avoiding model fine-tuning. In this paper, we implement this idea through two innovative designs in a system called RoMF. Extensive experiments using three sensing modalities (FMCW radar, Wi-Fi, and acoustic signals) show that it can achieve up to 95.3% accuracy in unseen conditions, including new environments, users and activity classes.
IEEE Transactions on Mobile Computing, vol. 24, no. 2, pp. 585–599. DOI: 10.1109/TMC.2024.3462466.
Citations: 0
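The invariance principle the paper exploits — same-class samples cluster together in a latent space, regardless of sensing condition — is the basis of prototype-style few-shot classification: a few labeled support samples per class define prototypes, and new samples take the label of the nearest prototype, with no fine-tuning. A minimal sketch, using hypothetical 2-D embeddings in place of learned WHAR features:

```python
import math

# Prototype-style few-shot classification: mean embeddings of a small support
# set act as class prototypes; queries take the nearest prototype's label.
# The 2-D embeddings and class names are illustrative, not real WHAR features.

def prototype(samples):
    """Mean embedding of a class's few-shot support samples."""
    dim = len(samples[0])
    return [sum(s[d] for s in samples) / len(samples) for d in range(dim)]

def classify(query, prototypes):
    """Label of the prototype closest (Euclidean distance) to the query."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(prototypes, key=lambda label: dist(query, prototypes[label]))

support = {
    "walk": [[0.9, 0.1], [1.1, -0.1]],   # hypothetical support embeddings
    "fall": [[-1.0, 1.0], [-0.8, 1.2]],
}
protos = {label: prototype(s) for label, s in support.items()}
print(classify([0.8, 0.0], protos))  # → walk
```

Because classification depends only on relative similarity within the latent space, swapping in support samples from a new environment or user updates the prototypes without retraining the embedding model — the "few-shot, no fine-tuning" property the abstract describes.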