
Sustainable Computing-Informatics & Systems: Latest Publications

A comprehensive comparative study on intelligence based optimization algorithms used for maximum power tracking in grid-PV systems
IF 4.5 CAS Zone 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2023-11-30 DOI: 10.1016/j.suscom.2023.100946
Marlin S, Sundarsingh Jebaseelan

For maximum power point tracking (MPPT) in solar photovoltaic (PV) systems, meta-heuristic optimization techniques have been widely applied over the last few decades, because traditional MPPT methodologies are unable to track the global MPP under shifting environmental conditions. Hence, it is essential to use an intelligence-based control algorithm for MPPT. The main purpose of this study is to investigate and assess the effectiveness of three cutting-edge and distinctive optimization algorithms for MPPT control: Mongoose Optimization (MO), the Prairie Dog Optimization Algorithm (PDOA), and a hybrid PDOA + MO. It also aims to select the most effective and sophisticated optimization algorithm to meet the grid systems' energy requirements. This research's original contribution is the implementation and performance evaluation of three alternative meta-heuristic models for MPPT control. The goal of this effort is to maximize the energy yield from photovoltaic systems in order to meet the energy demands of grid systems. Three different control strategies, MO + MPPT, PDOA + MPPT, and MO + PDOA + MPPT, are used in this work to achieve this goal. To evaluate the effectiveness and the improved performance outcomes, a number of parameters have been taken into account, including time, error, power, THD, and others. Furthermore, through a comprehensive simulation and comparison study, the outcomes of the MO, PDOA, and hybrid PDOA + MO techniques have been tested and confirmed. Comparisons are also made between the peak, settling, and rise times of the existing and proposed control models. The results and generated waveforms demonstrate that the hybrid PDOA + MO performs better than the other control models, with an enhanced efficiency of 99.5 %, a low rise time of 1.6 s, a low peak time of 1.05 s, a minimal settling time of 1.24 s, an error rate of 0.48, a response time of 0.005 s, and a tracking time of 0.0019 s.
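The abstract does not spell out the MO or PDOA update equations, so the following is only a minimal illustrative sketch (in Python) of the general pattern such intelligence-based MPPT controllers follow: a population of candidate duty cycles is repeatedly evaluated against the measured PV power and nudged toward the best performer. The `pv_power` curve and the update rule are simplified placeholders, not the authors' implementation.

```python
import random

def pv_power(duty_cycle):
    """Placeholder PV curve with a single global maximum near d = 0.6.
    In a real system this would be the measured panel power for the
    converter duty cycle applied by the MPPT controller."""
    return max(0.0, 250.0 * duty_cycle * (1.2 - duty_cycle))

def metaheuristic_mppt(pop_size=10, iterations=50, step=0.1, seed=1):
    """Generic population-based MPPT search over the duty-cycle range [0, 1]."""
    rng = random.Random(seed)
    population = [rng.random() for _ in range(pop_size)]
    best = max(population, key=pv_power)
    for _ in range(iterations):
        new_population = []
        for d in population:
            # Move each candidate toward the current best with a small random
            # perturbation (stand-in for the MO/PDOA position-update rules).
            candidate = d + step * (best - d) + rng.uniform(-0.02, 0.02)
            candidate = min(1.0, max(0.0, candidate))
            new_population.append(candidate if pv_power(candidate) >= pv_power(d) else d)
        population = new_population
        best = max(population + [best], key=pv_power)
    return best, pv_power(best)

if __name__ == "__main__":
    d_opt, p_opt = metaheuristic_mppt()
    print(f"duty cycle ~ {d_opt:.3f}, tracked power ~ {p_opt:.1f} W")
```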

Citations: 0
A chameleon and remora search optimization algorithm for handling task scheduling uncertainty problem in cloud computing
IF 4.5 CAS Zone 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2023-11-30 DOI: 10.1016/j.suscom.2023.100944
P. Pabitha , K. Nivitha , C. Gunavathi , B. Panjavarnam

Task scheduling in cloud computing is responsible for serving user requirements. The scheduling strategy must effectively handle the problems of high load on virtual machines (VMs), high cost, and lengthy scheduling time. The greatest challenge in the cloud computing environment is achieving the intended outcome of task scheduling under uncertain user request demands, since scheduling is responsible for assigning specific resources to requests to achieve effective task completion. However, most task scheduling approaches in the literature mainly focus on the design and development of scheduling algorithms but neglect the impact of uncertain factors such as millions of instructions per second (MIPS) and network bandwidth during the scheduling process. In this paper, a Chameleon and Remora Search Optimization Algorithm (CRSOA) is proposed to achieve an efficient scheduling process by exploring the impact of MIPS and network bandwidth, which directly affect virtual machine (VM) performance. The work further considers the uncertainty factors of task completion rate, load balance, scheduling cost, and makespan simultaneously during scheduling. A multi-objective cloud task scheduling optimization model is formulated by integrating the merits of the Chameleon Search Algorithm (CSA) and the Remora Search Optimization Algorithm (RSOA), using a greedy methodology to simulate the real cloud computing task scheduling process. The simulation results confirm that the proposed CRSOA approach minimizes completion time and handles load balancing between the available VMs effectively compared with other competitive metaheuristic task scheduling algorithms. The experimental investigation of CRSOA confirmed its advantage in minimizing makespan by 18.96%, cost by 22.18%, and degree of imbalance by 20.54% compared to the baseline approaches with different numbers of tasks and VMs.
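As a rough illustration of what a multi-objective fitness for such a scheduler can look like, the sketch below scores a task-to-VM assignment on makespan, cost, and degree of imbalance; the weights, the cost model, and the helper names are illustrative assumptions rather than the formulation used in the paper.

```python
def schedule_fitness(assignment, task_lengths, vm_mips, vm_cost_per_sec,
                     weights=(0.5, 0.3, 0.2)):
    """Score a task-to-VM assignment on makespan, cost, and load imbalance.

    assignment[i] is the index of the VM that task i runs on; task_lengths
    are in million instructions (MI) and vm_mips in MIPS, so each ratio is a
    runtime in seconds. Lower fitness is better.
    """
    vm_busy = [0.0] * len(vm_mips)
    total_cost = 0.0
    for task, vm in enumerate(assignment):
        runtime = task_lengths[task] / vm_mips[vm]
        vm_busy[vm] += runtime
        total_cost += runtime * vm_cost_per_sec[vm]
    makespan = max(vm_busy)
    imbalance = (max(vm_busy) - min(vm_busy)) / (sum(vm_busy) / len(vm_busy))
    w_mk, w_cost, w_imb = weights
    return w_mk * makespan + w_cost * total_cost + w_imb * imbalance

# Example: 5 tasks on 3 heterogeneous VMs.
print(schedule_fitness([0, 1, 2, 0, 1],
                       task_lengths=[4000, 8000, 2000, 6000, 1000],
                       vm_mips=[1000, 2000, 500],
                       vm_cost_per_sec=[0.02, 0.05, 0.01]))
```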

Citations: 0
Soft computing based smart grid fault detection using computerised data analysis with fuzzy machine learning model
IF 4.5 CAS Zone 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2023-11-30 DOI: 10.1016/j.suscom.2023.100945
Taifeng Chen, Chunbo Liu

Smart grid (SG) technologies make electrical grids more dependable and secure. At the same time, their heavy reliance on digital communication technologies raises new risks for effective and dependable electricity distribution. Strong grid monitoring and control capabilities are essential for system reliability. Among other things, SG applications face three key challenges: managing big data volumes, having enough real-time-capable measurement instruments, and providing two-way low-latency communication. This study proposes a unique method for detecting faults in the smart grid through data monitoring and classification using a fuzzy machine learning model. Enhanced smart sensor metering, performed in the cloud at the network's edge, is used to track data from the smart grid. A fuzzy reinforcement encoder adversarial neural network is then used to classify the tracked data. Experimental analysis is carried out in terms of scalability, reliability, accuracy, mean average precision, and throughput. With better monitoring technologies and predictive techniques, the potential use of the current grid can be increased and the fault frequency decreased. The proposed technique attained an accuracy of 93 %, a throughput of 94 %, a reliability of 81 %, a mean average precision of 89 %, and a scalability of 92 %.
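The fuzzy reinforcement encoder adversarial network itself is not described in the abstract; the snippet below only illustrates the fuzzification step such a pipeline typically starts from, mapping a crisp grid measurement into membership degrees before any learning-based classification. The membership ranges are made-up illustrative values.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_voltage(v_pu):
    """Fuzzify a per-unit bus voltage into linguistic grades (illustrative ranges)."""
    return {
        "under_voltage": triangular(v_pu, 0.80, 0.90, 0.95),
        "normal":        triangular(v_pu, 0.94, 1.00, 1.06),
        "over_voltage":  triangular(v_pu, 1.05, 1.10, 1.20),
    }

def grade(v_pu):
    """Return the dominant linguistic grade and its membership degree."""
    memberships = fuzzify_voltage(v_pu)
    label = max(memberships, key=memberships.get)
    return label, memberships[label]

print(grade(0.92))   # leans toward "under_voltage"
print(grade(1.00))   # "normal" with full membership
```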

Citations: 0
Priority based job scheduling technique that utilizes gaps to increase the efficiency of job distribution in cloud computing
IF 4.5 CAS Zone 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2023-11-30 DOI: 10.1016/j.suscom.2023.100942
Saydul Akbar Murad , Zafril Rizal M. Azmi , Abu Jafar Md. Muzahid , Md. Murad Hossain Sarker , M. Saef Ullah Miah , MD. Khairul Bashar Bhuiyan , Nick Rahimi , Anupam Kumar Bairagi

A growing number of services, accessible and usable by individuals and businesses on a pay-as-you-go basis, are being made available via cloud computing platforms. The business services paradigm in cloud computing encounters several quality of service (QoS) challenges, such as flow time, makespan time, reliability, and delay. To overcome these obstacles, we first designed a resource management framework for cloud computing systems. This framework elucidates the methodology of resource management in the context of cloud job scheduling. Then, we study the impact of a Virtual Machine’s (VM’s) physical resources on the consistency with which cloud services are executed. After that, we developed a priority-based fair scheduling (PBFS) algorithm to schedule jobs so that they have access to the required resources at optimal times. The algorithm has been devised utilizing three key characteristics, namely CPU time, arrival time, and job length. For optimal scheduling of cloud jobs, we also devised a backfilling technique called Earliest Gap Shortest Job First (EG-SJF), which prioritizes filling in schedule gaps in a specific order. The simulation was carried out with the help of the CloudSim framework. Finally, we compare our proposed PBFS algorithm to LJF, FCFS, and MAX-MIN and find that it achieves better results in terms of overall delay, makespan time, and flow time.
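The abstract describes EG-SJF as filling schedule gaps in a specific order. The sketch below is one plausible reading of that idea, scanning idle gaps earliest-first and placing the shortest waiting job that fits; the details are an assumption for illustration, not the authors' implementation.

```python
def eg_sjf_backfill(gaps, waiting_jobs):
    """Backfill waiting jobs into idle gaps of a schedule.

    gaps: list of (start, end) idle intervals on a VM's timeline.
    waiting_jobs: list of (job_id, runtime) tuples.
    Returns a list of (job_id, start_time) placements.
    """
    placements = []
    remaining = sorted(waiting_jobs, key=lambda j: j[1])      # shortest job first
    for start, end in sorted(gaps):                           # earliest gap first
        cursor = start
        still_waiting = []
        for job_id, runtime in remaining:
            if cursor + runtime <= end:
                placements.append((job_id, cursor))
                cursor += runtime
            else:
                still_waiting.append((job_id, runtime))
        remaining = still_waiting
    return placements

# Two idle gaps and three waiting jobs of different lengths.
print(eg_sjf_backfill(gaps=[(10, 14), (20, 30)],
                      waiting_jobs=[("j1", 6), ("j2", 3), ("j3", 2)]))
```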

Citations: 0
Machine learning compliance-aware dynamic software allocation for energy, cost and resource-efficient cloud environment
IF 4.5 CAS Zone 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2023-11-28 DOI: 10.1016/j.suscom.2023.100938
Leila Helali, Mohamed Nazih Omri

With the growing number of cloud services protected by licenses, compliance management and assurance are becoming a critical need to support the development of trustworthy cloud systems. In these systems, the multiplication of services and inefficient resource utilization increase energy consumption and costs despite the consolidation initiatives underway. Few works deal with resource allocation optimization at the SaaS level, and those that do generally do not consider compliance aspects. The reported consolidation work does not address license management in the cloud environment as a whole, particularly from a resource management perspective, and the vast majority of consolidation work focuses on resource optimization at the infrastructure level. Thus, we propose a software license consolidation scheme based on multi-objective reinforcement learning that enables efficient use of resources and optimizes energy consumption, resource wastage, and costs while ensuring compliance with the processor-based licensing model. The experimental results show that our solution outperforms the baseline approaches in different scenarios with homogeneous and heterogeneous resources under different data center scales.
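The reward shaping is not given in the abstract; the following sketch shows one way a multi-objective, compliance-aware reward could be combined with a tabular Q-learning update. The weights, the compliance penalty, and the toy state/action encoding are all illustrative assumptions.

```python
def allocation_reward(energy_kwh, wasted_cores, cost_usd, licensed_cores_used,
                      licensed_cores_owned, weights=(0.4, 0.3, 0.3), penalty=10.0):
    """Negative weighted sum of energy, resource wastage and cost, with a large
    penalty whenever the placement exceeds the processor-based license entitlement."""
    w_e, w_w, w_c = weights
    reward = -(w_e * energy_kwh + w_w * wasted_cores + w_c * cost_usd)
    if licensed_cores_used > licensed_cores_owned:      # compliance violation
        reward -= penalty
    return reward

def q_update(q_table, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step for the (state, action) pair."""
    best_next = max(q_table.get((next_state, a), 0.0) for a in range(N_ACTIONS))
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + alpha * (reward + gamma * best_next - old)

N_ACTIONS = 4            # e.g. candidate servers a software instance can move to
q = {}
r = allocation_reward(energy_kwh=1.2, wasted_cores=3, cost_usd=0.8,
                      licensed_cores_used=10, licensed_cores_owned=8)
q_update(q, state="s0", action=2, reward=r, next_state="s1")
print(r, q[("s0", 2)])
```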

Citations: 0
Research on IoT-based hybrid electrical vehicles energy management systems using machine learning-based algorithm
IF 4.5 CAS Zone 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2023-11-28 DOI: 10.1016/j.suscom.2023.100943
R. Manivannan

Electric vehicles (EVs) are quickly becoming a staple of smart transportation in smart-city applications due to their ability to reduce carbon footprints. However, the widespread use of electric vehicles significantly strains the electrical grid. An in-depth description of an EV's energy management system (EMS) must account for the vital role of the vehicle's powertrain. The energy for propulsion in electric automobiles comes from a rechargeable battery, and the safe and dependable operation of batteries in electric vehicles relies heavily on online monitoring and state-of-charge estimation. An energy management strategy that considers the electric vehicle's battery and ultra-capacitor may lessen the vehicle's reliance on external power sources and extend the battery's lifespan. A machine learning-based dynamic programming algorithm is used in designing the energy management system, teaching the system how to respond appropriately to various situations without resorting to predefined rules. Therefore, this research aims to use machine learning to create a Smart Energy Management System for Hybrid Electrical Vehicles (SEMS-HEV) with energy storage. Energy optimization techniques and algorithms are necessary in this setting to reduce cost and charging time and to arrange the EV charging process appropriately, preventing surges in the electrical supply that may impact the transmission network. To improve the performance of the energy management system, this study employs an IoT-based smart charging system for scheduling V2G connections for hybrid electrical vehicles. It allows for more precise and effective control and greater efficiency by enabling the system to learn from its surroundings.
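The abstract mentions a dynamic-programming-based design without further detail, so the sketch below is a generic backward DP over a discretized battery state of charge (SOC) that splits a known power demand between the battery and a secondary source while minimizing the secondary energy drawn. All parameter values and the cost model are illustrative assumptions.

```python
def dp_energy_split(demand_kw, soc_levels=21, dt_h=0.1,
                    batt_capacity_kwh=2.0, batt_max_kw=15.0, secondary_cost=1.0):
    """Backward dynamic programming over a discretized battery SOC.

    demand_kw: power demand per time step. At each step the battery supplies
    part of the demand (limited by its power rating and remaining charge) and
    the secondary source supplies the rest; the objective is to minimize the
    total secondary energy drawn.
    """
    soc_step = batt_capacity_kwh / (soc_levels - 1)
    horizon = len(demand_kw)
    # cost_to_go[t][s] = best achievable secondary energy from step t with SOC index s
    cost_to_go = [[0.0] * soc_levels for _ in range(horizon + 1)]
    policy = [[0] * soc_levels for _ in range(horizon)]

    for t in range(horizon - 1, -1, -1):
        for s in range(soc_levels):
            best_cost, best_drop = float("inf"), 0
            for drop in range(s + 1):                       # SOC indices discharged
                batt_kw = drop * soc_step / dt_h
                if batt_kw > batt_max_kw or batt_kw > demand_kw[t]:
                    continue
                secondary_kwh = (demand_kw[t] - batt_kw) * dt_h
                cost = secondary_cost * secondary_kwh + cost_to_go[t + 1][s - drop]
                if cost < best_cost:
                    best_cost, best_drop = cost, drop
            cost_to_go[t][s] = best_cost
            policy[t][s] = best_drop
    return cost_to_go, policy

cost, policy = dp_energy_split(demand_kw=[12.0, 5.0, 20.0, 8.0])
print("secondary energy from a full battery:", round(cost[0][-1], 3), "kWh")
```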

Citations: 0
Sustainable and lightweight domain-based intrusion detection system for in-vehicle network
IF 4.5 CAS Zone 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2023-11-25 DOI: 10.1016/j.suscom.2023.100936
Edy Kristianto , Po-Ching Lin , Ren-Hung Hwang

Intelligent transportation systems are designed to enhance and optimize traffic flow, improve the safety of urban mobility, and increase energy efficiency. While advanced vehicles are equipped with new features, such as communication technologies for the exchange of safety messages, the communication interfaces of these vehicles enlarge the attack surface for attackers to exploit, even into the in-vehicle network (IVN). Automotive Ethernet and intrusion detection systems (IDSs) are promising solutions to the security problem inside the IVN. Automotive Ethernet offers higher bandwidth capacity at an economical cost and more flexibility for expansion than the current solution. IDSs can protect the IVN from attackers compromising the vehicle and can be implemented with machine learning to learn from normal IVN traffic behavior. However, many IDSs that utilize machine learning face hardware limitations when training within the in-vehicle network, so the models have to be trained outside the IVN and then imported into it. Moreover, IVN messages are unlabeled, with no indication of whether they are normal or under attack. We propose a lightweight unsupervised IDS that enables training in the IVN with limited computation resources. Our IDS models achieve a parameter reduction of up to 94% compared to existing models. This leads to a reduction in memory usage of up to 86%, training time cut by up to 69%, and a drop in energy consumption of up to 68%. Despite the size reduction, the proposed models are only slightly less accurate than current solutions, by at most 2%.
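The paper's architecture is not given in the abstract; the snippet below illustrates the general unsupervised idea with a generic reconstruction-error detector: fit a compact model on features of normal in-vehicle frames only, then flag frames whose reconstruction error exceeds a threshold learned from that normal traffic. The PCA-style model and feature dimensions are stand-ins, not the authors' network.

```python
import numpy as np

def fit_detector(normal_frames, n_components=2, percentile=99.0):
    """Fit a PCA-style reconstruction model on normal in-vehicle traffic features."""
    mean = normal_frames.mean(axis=0)
    centered = normal_frames - mean
    # Principal directions of the normal traffic.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    recon = centered @ components.T @ components + mean
    errors = np.linalg.norm(normal_frames - recon, axis=1)
    threshold = np.percentile(errors, percentile)   # tolerate a little normal noise
    return {"mean": mean, "components": components, "threshold": threshold}

def is_attack(detector, frame):
    """Flag a frame whose reconstruction error exceeds the learned threshold."""
    centered = frame - detector["mean"]
    recon = centered @ detector["components"].T @ detector["components"] + detector["mean"]
    return np.linalg.norm(frame - recon) > detector["threshold"]

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 8))   # stand-in frame features
det = fit_detector(normal)
# A seen-before normal frame vs. the same frame shifted far off the normal manifold.
print(is_attack(det, normal[0]), is_attack(det, normal[0] + 8.0))
```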

Citations: 0
Tweaked optimization based quality aware VM selection method for effectual placement strategy
IF 4.5 CAS Zone 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2023-11-23 DOI: 10.1016/j.suscom.2023.100939
Rubaya Khatun , Md Ashifuddin Mondal

Cloud computing has become a standard and promising distributed computing framework for the provision of on-demand computing resources under pay-per-use concepts. Operating these computing resources results in high power consumption, increased cost, and high CO2 emissions to the environment. The major difficulties faced in cloud data centers are SLA violations, increased time, low resource utilization, and high power and energy consumption. Hence, considering these difficulties, a novel virtual machine (VM) selection approach is proposed to minimize these constraints while maintaining the SLA. First, based on the assumptions about VMs and physical machines (PMs), overutilized hosts are detected using a static threshold approach, while underutilized hosts are identified based on the utilized resources. After load detection, the VMs that need to be migrated to other PMs are selected using the tweaked chimp optimization algorithm (TCOA). After selecting VMs without influencing the capacity of other VMs, the placement process is performed on other PMs using a power-aware best-fit-decreasing approach. The proposed approach can greatly improve the QoS by selecting the optimal VMs that need to be migrated. CloudSim is used as a simulation tool, and the results are compared with existing techniques in terms of migration time, energy consumption, SLA violations per host, and so on to prove its superiority. The energy consumption of the proposed model is 195.3 kWh, the overall SLA violation rate is 0.032%, and the migration time for 500 virtual machines is 8.72 s.
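A simplified sketch of the detection and placement steps described above: hosts above a static CPU-utilization threshold are flagged as overloaded, and migrating VMs are placed with a power-aware best-fit-decreasing rule that picks the host whose estimated power draw increases the least. The linear power model, host parameters, and threshold value are common simplifications, not the paper's exact settings.

```python
def estimated_power(util, idle_w, max_w):
    """Common linear host power model: power grows linearly with CPU utilization."""
    return idle_w + (max_w - idle_w) * util

def overloaded_hosts(host_util, threshold=0.8):
    """Static-threshold overload detection on per-host CPU utilization."""
    return [h for h, u in host_util.items() if u > threshold]

def power_aware_bfd(vms, hosts):
    """Place VMs (CPU demand as a fraction of one host), largest first, on the
    host whose estimated power draw increases the least and that still fits.

    hosts: host_id -> dict with current 'util' and power-model parameters."""
    placement = {}
    for vm_id, demand in sorted(vms.items(), key=lambda kv: kv[1], reverse=True):
        best_host, best_delta = None, float("inf")
        for host_id, h in hosts.items():
            if h["util"] + demand > 1.0:
                continue
            delta = (estimated_power(h["util"] + demand, h["idle_w"], h["max_w"])
                     - estimated_power(h["util"], h["idle_w"], h["max_w"]))
            if delta < best_delta:
                best_host, best_delta = host_id, delta
        if best_host is None:
            raise RuntimeError(f"no host can accommodate {vm_id}")
        hosts[best_host]["util"] += demand
        placement[vm_id] = best_host
    return placement

hosts = {
    "pm1": {"util": 0.55, "idle_w": 120.0, "max_w": 250.0},
    "pm2": {"util": 0.30, "idle_w": 90.0,  "max_w": 180.0},
    "pm3": {"util": 0.10, "idle_w": 150.0, "max_w": 300.0},
}
print(overloaded_hosts({"pm1": 0.85, "pm2": 0.30, "pm3": 0.10}))
print(power_aware_bfd({"vm-a": 0.35, "vm-b": 0.20}, hosts))
```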

Citations: 0
ECG signals-based security and steganography approaches in WBANs: A comprehensive survey and taxonomy
IF 4.5 CAS Zone 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2023-11-20 DOI: 10.1016/j.suscom.2023.100937
Mohammad Masdari , Shahab S. Band , Sultan Noman Qasem , Biju Theruvil Sayed , Hao-Ting Pai

Wireless Body Area Networks (WBANs) are integral components of e-healthcare systems, responsible for monitoring patients' physiological states through intelligent implantable or wearable sensor nodes. These nodes collect medical data, which is then transmitted to remote healthcare facilities for thorough evaluation. Securing medical data within WBANs is paramount due to its central role in preserving patient privacy and confidentiality. Notably, electrocardiogram (ECG) signals have recently gained prominence as pivotal elements within diverse security frameworks. Incorporating ECG signals strategically enhances the security and reliability of WBANs and broader e-healthcare systems, instilling greater trustworthiness. This survey article provides an in-depth exploration of contemporary ECG-based security schemes. The security paradigms are categorized according to how they use ECG signals, which identifies three key domains: the first involves schemes that utilize ECG signals for cryptographic operations, encompassing key generation, agreement, management, and authentication; the second category employs steganography-based techniques, using ECG signals to conceal patients' sensitive medical data; the third category focuses on enhancing ECG signal security during data transmission. Each category is meticulously elaborated, detailing architectural foundations, notable contributions, and intrinsic security services. Furthermore, each section presents a comprehensive overview of the attributes characterizing ECG-based security frameworks, including the employed datasets, simulation environments, evaluation metrics, and inherent advantages and limitations. The survey concludes with a thorough analysis of the distinctive attributes underpinning these security frameworks and sheds light on potential directions for future research.
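One technique that recurs in this literature, sketched here only as a generic illustration and not as any specific scheme from the survey, derives shared key bits from inter-pulse intervals (IPIs): sensors on the same body observe nearly identical R-peak timings, so keeping a few least-significant bits of each quantized IPI yields bit strings that agree across on-body sensors but are hard to predict remotely.

```python
def ipi_key_bits(r_peak_times_ms, bits_per_ipi=4):
    """Derive key bits from ECG inter-pulse intervals (IPIs).

    r_peak_times_ms: timestamps of detected R-peaks in milliseconds.
    Keeps the `bits_per_ipi` least-significant bits of each integer IPI,
    which carry most of the beat-to-beat variability.
    """
    ipis = [int(b - a) for a, b in zip(r_peak_times_ms, r_peak_times_ms[1:])]
    bits = ""
    for ipi in ipis:
        bits += format(ipi & ((1 << bits_per_ipi) - 1), f"0{bits_per_ipi}b")
    return bits

# R-peaks roughly 800 ms apart with natural beat-to-beat jitter.
peaks = [0, 812, 1597, 2410, 3188, 3999]
print(ipi_key_bits(peaks))   # 5 IPIs -> 20 key bits
```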

Citations: 0
An efficient multi-format low-precision floating-point multiplier
IF 4.5 CAS Zone 3 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2023-11-18 DOI: 10.1016/j.suscom.2023.100928
Hadis Ahmadpour Kermani , Azadeh Alsadat Emrani Zarandi

Low-precision computing has emerged as a promising technology to enhance performance in modern applications like deep neural network training and scientific computing. However, most existing circuits and systems are tailored to a single type of half-precision format, such as FP16 or BFloat16. In light of this limitation, this paper introduces the design of a multi-format floating-point multiplier capable of supporting a wide range of half-precision formats, including both their signed and unsigned versions. Our design emphasizes high reconfigurability, allowing it to adapt to the required dynamic range and precision. Moreover, experimental results showed that the introduced unsigned versions of the half-precision floating-point formats resulted in improved circuit parameters and energy consumption.
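As a small illustration of what "multi-format" means at the bit level: FP16 and BFloat16 share a 16-bit width and a sign bit but split the remaining bits differently between exponent and mantissa, so a datapath parameterized by (exponent width, mantissa width, bias) can serve both. The decoder below handles only normal numbers and performs no rounding; it illustrates the formats, not the hardware design from the paper.

```python
FORMATS = {
    # name: (exponent_bits, mantissa_bits, bias)
    "fp16":     (5, 10, 15),
    "bfloat16": (8, 7, 127),
}

def decode(bits, fmt):
    """Decode a 16-bit pattern as a normal number in the given format."""
    exp_bits, man_bits, bias = FORMATS[fmt]
    sign = (bits >> (exp_bits + man_bits)) & 0x1
    exponent = (bits >> man_bits) & ((1 << exp_bits) - 1)
    mantissa = bits & ((1 << man_bits) - 1)
    return (-1.0) ** sign * (1.0 + mantissa / (1 << man_bits)) * 2.0 ** (exponent - bias)

def multiply(a_bits, b_bits, fmt):
    """Multiply two encoded operands by working on their decoded fields."""
    return decode(a_bits, fmt) * decode(b_bits, fmt)

# 0x3C00 is 1.0 in FP16 and 0x4200 is 3.0; 0x3F80 is 1.0 in BFloat16 and 0x4040 is 3.0.
print(multiply(0x3C00, 0x4200, "fp16"))        # 3.0
print(multiply(0x3F80, 0x4040, "bfloat16"))    # 3.0
```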

Citations: 0