Satisfying Energy-Efficiency Constraints for Mobile Systems
Pub Date: 2024-08-21 | DOI: 10.1109/TMC.2024.3447026
Xueliang Li;Shicong Hong;Junyang Chen;Junkai Ji;Chengwen Luo;Guihai Yan;Zhibin Yu;Jianqiang Li
Energy efficiency is one of the most important design criteria for mobile systems such as smartphones and tablets. Yet current mobile systems routinely over-provision resources to satisfy users. The root cause is that we do not know how much system performance (and thus energy) is needed to exactly satisfy users. Psychophysics defines a quantified link between physical stimuli and human-perceived stimuli. We therefore leverage psychophysics to study the quantified correlation between computer-architecture resources (the physical stimuli) and user satisfaction (the perceived stimuli). We then exploit this correlation to precisely apportion resources to tasks and accurately satisfy users. Building on our precisely defined user-satisfaction criteria and carefully designed algorithms, we reduce the energy consumption of computer architectures by up to 42.9% without harming user experience. To the best of our knowledge, this is the first work to theoretically and accurately model this correlation. Our work opens a new research direction for fundamentally improving the energy efficiency of mobile systems.
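As a rough illustration of the idea (not the authors' actual model), the sketch below assumes a Stevens-power-law mapping from a physical stimulus (here, CPU frequency) to perceived satisfaction and then picks the lowest frequency level that still meets a satisfaction target. The exponent, the frequency list, and the 80% target are all hypothetical placeholders.

```python
# Hypothetical sketch: map a physical stimulus (CPU frequency) to perceived
# satisfaction via a Stevens power law, then choose the lowest frequency that
# still meets a satisfaction target. All constants are illustrative only and
# are not the paper's fitted model.

def perceived_satisfaction(freq_ghz: float, k: float = 1.0, exponent: float = 0.6) -> float:
    """Stevens' power law: perceived magnitude = k * stimulus ** exponent."""
    return k * freq_ghz ** exponent

def pick_min_frequency(freqs_ghz, target: float) -> float:
    """Return the lowest available frequency whose predicted satisfaction
    meets the target; fall back to the maximum frequency otherwise."""
    for f in sorted(freqs_ghz):
        if perceived_satisfaction(f) >= target:
            return f
    return max(freqs_ghz)

if __name__ == "__main__":
    levels = [0.6, 1.0, 1.4, 1.8, 2.2]                  # hypothetical DVFS levels (GHz)
    target = 0.8 * perceived_satisfaction(max(levels))  # hypothetical "satisfied" threshold
    chosen = pick_min_frequency(levels, target)
    print(f"Run the task at {chosen} GHz instead of {max(levels)} GHz")
```

In practice the stimulus-to-satisfaction curve would have to be fitted per task from user studies, which is where the paper's psychophysics-based modeling comes in.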
{"title":"Satisfying Energy-Efficiency Constraints for Mobile Systems","authors":"Xueliang Li;Shicong Hong;Junyang Chen;Junkai Ji;Chengwen Luo;Guihai Yan;Zhibin Yu;Jianqiang Li","doi":"10.1109/TMC.2024.3447026","DOIUrl":"10.1109/TMC.2024.3447026","url":null,"abstract":"Energy-efficiency is one of the most important design criteria for mobile systems, such as smartphones and tablets. But current mobile systems always over-provision resources to satisfy users. The root cause is that, we have no knowledge on how much of system performance/energy will exactly satisfy users. Psychophysics defines the quantified link between physical stimuli and human-perceived stimuli. So, we will leverage psychophysics to study the quantified correlation between computer architecture resources (i.e., physical stimuli) and user satisfaction (i.e., human-perceived stimuli). We then exploit such correlation to precisely apportion resources to operate tasks and accurately satisfy users. Benefiting from our precisely-defined user satisfaction criteria and well-designed algorithms, we can reduce energy consumption of computer architectures by up to 42.9% without harming user experience. To the best of our knowledge, we for the first time theoretically and accurately model such substantial correlation. Our work opens a new research domain for fundamentally improving mobiles’ energy-efficiency.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":null,"pages":null},"PeriodicalIF":7.7,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142180595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-21 | DOI: 10.1109/TMC.2024.3447000
Ziru Niu;Hai Dong;A. K. Qin;Tao Gu
Federated Learning (FL) has gained great popularity in the Internet of Things (IoT) as a powerful paradigm for offering intelligent services to customers while preserving data privacy. Under the orchestration of a server, edge devices (also called clients in FL) collaboratively train a global deep-learning model without sharing any local data. Nevertheless, unequal training contributions among clients make FL vulnerable: clients with heavily biased datasets can easily compromise FL by sending malicious or heavily biased parameter updates. Furthermore, resource shortage in the network becomes a bottleneck. Training deep-learning models on edge devices incurs overwhelming computation overhead, and transmitting these models across the network incurs significant communication overhead, so the FL process consumes enormous amounts of resources, including computation resources such as energy and communication resources such as bandwidth. To comprehensively address these challenges, in this paper, we present FLrce, an efficient FL framework with a r