
Journal of Grid Computing: Latest Publications

Accounting Information Systems and Strategic Performance: The Interplay of Digital Technology and Edge Computing Devices
IF 5.5, CAS Tier 2 (Computer Science), Q1 Computer Science. Pub Date: 2023-12-29. DOI: 10.1007/s10723-023-09720-8
Xi Zhen, Li Zhen

With the rapid development of digital technologies, scholars and industries are moving into the information age, where data processing is the accounting industry's major challenge. This study aimed to analyze the use of these digital technologies for attaining strategic performance, with the accounting information system (AIS) as a mediator. Further, this study also explores the moderation of the linkage between digital technology (DT) and strategic performance. Amid this rapid change, competitiveness is crucial for business organizations. Hence, technology is the key factor for maintaining the competitiveness of industrial firms, specifically where information plays a vital role in management decisions. Accounting software is a significant tool that efficiently collects data and supports timely decisions, allowing the business strategy to respond quickly to the market. However, the available accounting software is costly, and small-scale businesses cannot afford it. Therefore, this paper developed a digital accounting system using artificial intelligence (AI) and edge computing (EC) to process and store accounting data. This article introduces a novel edge framework for digital data processing with advanced data processing methods. With the growth of the IoT, data sizes have increased significantly. Moreover, traditional cloud platforms are enriched with EC to process the vast amount of data where it is collected. Therefore, the business can adapt to data of new sizes and raise its standards in terms of technical content. The design defines distributed storage in the cloud and, once the system is designed, tests the cluster performance of the system and its effects. In the end, the system operation time, load balancing, and rows of data are tested experimentally. The results and their analysis demonstrated that data processing with EC for the AIS improved the acceleration rate, operational efficiency, and execution rate.
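The abstract reports experiments on operation time, load balancing, and rows of data across the edge cluster. The paper's implementation is not given; as a hedged illustration only, a least-loaded placement policy of the kind such a cluster might use can be sketched as follows (the record and node counts are invented for the example):

```python
# Hedged sketch, not the paper's code: least-loaded placement of accounting
# records across edge nodes, then per-node row counts for a load-balance check.

def assign_records(records, n_nodes):
    """Assign each record to the currently least-loaded edge node."""
    loads = [0] * n_nodes        # rows of data currently held by each node
    placement = []
    for _ in records:
        node = loads.index(min(loads))   # pick the least-loaded node
        loads[node] += 1
        placement.append(node)
    return loads, placement

# Example: 10 records over a 3-node edge cluster.
loads, placement = assign_records(range(10), 3)
```

With equal node capacities this degenerates to round-robin, so row counts differ by at most one across nodes.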

Citations: 0
Development of Analytical Offloading for Innovative Internet of Vehicles Based on Mobile Edge Computing
IF 5.5, CAS Tier 2 (Computer Science), Q1 Computer Science. Pub Date: 2023-12-28. DOI: 10.1007/s10723-023-09719-1
Ming Zhang

The current task-offloading technique needs to perform more effectively. Onboard terminals cannot execute efficient computation due to the explosive expansion of data flow, the rapid increase in the vehicle population, and the growing scarcity of spectrum resources. As a result, this study proposes a task-offloading technique based on reinforcement learning for the Internet of Vehicles edge computing architecture. The system framework for the Internet of Vehicles is first developed. While the control centre gathers all vehicle information, the roadside unit collects vehicle data from its neighborhood and sends it to a mobile edge computing server for processing. Then, to guarantee that job dispatching in the Internet of Vehicles is logical, the computation model, communications approach, interference approach, and confidentiality concerns are established. This research examines how best to analyze and design a computation offloading approach for a multiuser smart Internet of Vehicles (IoV) based on mobile edge computing (MEC). We present an analytical offloading strategy for various MEC networks, covering one-to-one, one-to-two, and two-to-one situations, since it is challenging to determine an analytical offloading proportion for a generic MEC-based IoV network. The suggested analytical offloading strategy can match the brute-force (BF) approach and the best performance of the Deep Deterministic Policy Gradient (DDPG). For the analytical offloading design of a general MEC-based IoV, the analytical results in this study can be a valuable source of information.
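For the one-to-one case, an analytical offloading proportion can be derived in closed form by splitting the task so that local and offloaded execution, running in parallel, finish at the same time. This is a generic textbook derivation, not necessarily the paper's exact formula, and all parameter values below are invented:

```python
def optimal_offload_fraction(cycles, data_bits, f_local, f_edge, rate):
    """Closed-form offload fraction for a single-user MEC link where the
    local CPU and the edge server work in parallel: choose the split that
    makes both sides finish simultaneously."""
    t_local = cycles / f_local                     # run everything locally
    t_edge = data_bits / rate + cycles / f_edge    # transmit, then edge compute
    return t_local / (t_local + t_edge)

# 1 Gcycle task, 1 Mbit payload, 1 GHz local CPU, 10 GHz edge CPU, 1 Mbit/s link.
alpha = optimal_offload_fraction(1e9, 1e6, 1e9, 1e10, 1e6)
```

At the returned fraction, the offloaded share's transmit-plus-compute time equals the remaining local share's compute time, which is what makes the split optimal for parallel execution.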

Citations: 0
Edge Computing with Fog-cloud for Heart Data Processing using Particle Swarm Optimized Deep Learning Technique
IF 5.5, CAS Tier 2 (Computer Science), Q1 Computer Science. Pub Date: 2023-12-23. DOI: 10.1007/s10723-023-09706-6
Sheng Chai, Lantian Guo

Chronic illnesses such as heart disease, diabetes, cancer, and respiratory diseases are complex and pose a significant threat to global health. Processing heart data is particularly challenging due to the variability of symptoms. However, advancements in smart wearable devices, computing technologies, and IoT solutions have made heart data processing easier. The proposed model integrates Edge-Fog-Cloud computing to provide rapid and accurate results, making it a promising solution for heart data processing. Patient data is collected using hardware components, and cardiac feature extraction is used to obtain crucial features from the data signals. An Optimized Cascaded Convolutional Neural Network (CCNN) processes these features, and the CCNN's hyperparameters are optimized using both PSO (Particle Swarm Optimization) and GSO (Galactic Swarm Optimization) techniques. The proposed system leverages the strengths of both optimization algorithms to improve the accuracy and efficiency of the heart data processing system: the GSO-CCNN optimizes the CCNN's hyperparameters, while the PSO-CCNN optimizes the feature selection process. Combining both algorithms enhances the system's ability to identify relevant features and optimize the CCNN's architecture. Performance analysis demonstrates that the proposed technique, which integrates Edge-Fog-Cloud computing with combined PSO-CCNN and GSO-CCNN techniques, outperforms models such as PSO-CCNN, GSO-CCNN, WOA-CCNN, and DHOA-CCNN that rely on traditional cloud and edge technologies. The proposed model is evaluated in terms of time, energy consumption, bandwidth, and the standard performance metrics of accuracy, precision, recall, specificity, and F1-score. The comparative analysis therefore confirms the proposed system's efficiency over conventional models for heart data processing.
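Tuning CCNN hyperparameters with PSO follows the standard particle swarm recipe: particles track personal and global bests and move under weighted attraction to both. Below is a minimal 1-D sketch over a toy objective standing in for validation loss as a function of a single hyperparameter; the inertia and attraction coefficients (0.7, 1.4) are conventional defaults, not the paper's settings:

```python
import random

def pso(objective, bounds, n_particles=10, iters=50, seed=0):
    """Minimal particle swarm optimisation over a 1-D search range."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                       # each particle's personal best
    gbest = min(pos, key=objective)      # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i]                      # inertia
                      + 1.4 * r1 * (pbest[i] - pos[i])  # pull to personal best
                      + 1.4 * r2 * (gbest - pos[i]))    # pull to global best
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))  # clamp to bounds
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=objective)
    return gbest

# Toy stand-in for "validation loss vs. learning rate", minimised at 0.3.
best = pso(lambda x: (x - 0.3) ** 2, bounds=(0.0, 1.0))
```

In the paper's setting the objective would instead train and validate the CCNN at each candidate hyperparameter vector, which is why such searches are usually run with small swarms.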

Citations: 0
DEEPBIN: Deep Learning Based Garbage Classification for Households Using Sustainable Natural Technologies
IF 5.5, CAS Tier 2 (Computer Science), Q1 Computer Science. Pub Date: 2023-12-19. DOI: 10.1007/s10723-023-09722-6
Yu Song, Xin He, Xiwang Tang, Bo Yin, Jie Du, Jiali Liu, Zhongbao Zhao, Shigang Geng

Today, things that are accessible worldwide are being upgraded to innovative technology. In this research, an intelligent garbage system is designed with state-of-the-art methods using deep learning technologies. Large volumes of garbage are produced due to urbanization and the rising population in urban areas, so it is essential to manage daily trash from homes and living environments. This research aims to provide an intelligent IoT-based garbage bin system, with classification done using deep learning techniques. This smart bin is capable of sensing more varieties of household garbage. Though many technologies have been successfully implemented with IoT and machine learning, there is still a need for sustainable natural technologies to manage daily waste. The innovative IoT-based garbage system uses various sensors, such as humidity, temperature, gas, and liquid sensors, to identify the garbage condition. Initially, the Smart Garbage Bin system is designed, and then data are collected using a garbage annotation application. Next, a deep learning method is used for object detection and classification of garbage images. The Arithmetic Optimization Algorithm (AOA) with Improved RefineDet (IRD) is used for object detection, and the EfficientNet-B0 model is used for classification of garbage images. The garbage content is identified and prepared to train the deep learning model to perform efficient classification tasks. For result evaluation, smart bins are deployed in real time, and accuracy is estimated. Furthermore, fine-tuning on region-specific litter photos led to enhanced categorization.
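Detection pipelines such as the AOA-IRD detector described above typically end with non-maximum suppression over candidate boxes, keeping the highest-scoring detection and discarding heavily overlapping duplicates. The paper's post-processing is not specified; this is a generic sketch of that standard step, with boxes given as `(x1, y1, x2, y2)`:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep boxes in descending score order,
    dropping any box whose IoU with an already-kept box exceeds thresh."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep
```

For example, two near-identical boxes around the same piece of litter collapse to the single higher-scoring one, while a distant box survives.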

Citations: 0
Markovian with Federated Deep Recurrent Neural Network for Edge—IoMT to Improve Healthcare in Smart Cities
IF 5.5, CAS Tier 2 (Computer Science), Q1 Computer Science. Pub Date: 2023-12-19. DOI: 10.1007/s10723-023-09709-3
Yuliang Gai, Yuxin Liu, Minghao Li, Shengcheng Yang

The architectural design of smart cities should prioritize the provision of critical medical services. This involves establishing improved connectivity and leveraging supercomputing capabilities to enhance the quality of service (QoS) offered to residents. Edge computing is vital in healthcare applications because it enables the low network latencies necessary for real-time data processing. By implementing edge computing, smart cities can benefit from reduced latency, increased bandwidth, and improved power consumption efficiency. In the context of Mobile Edge Computing (MEC), the study proposes a novel approach, the Markovian Decision Process with Federated Deep Recurrent Neural Network (MDP-FDRNN), as the primary algorithm for managing resource allocation. MEC focuses on utilizing edge computing capabilities to process data and perform computations at the network's edges. The conducted tests demonstrate that the MDP-FDRNN algorithm is superior and well suited to effectively resolving high-volume processing traffic at the network's edges. It significantly reduces processing time, which is particularly crucial for healthcare operations related to patients' health problems. By employing the MDP-FDRNN algorithm in resource allocation management, smart cities can efficiently use their edge computing infrastructure to handle complex processing tasks. The algorithm's superior performance in reducing processing time shows its potential to support critical healthcare operations within smart cities, thereby enhancing the overall quality of healthcare services provided to residents. This article underscores the significance of implementing appropriate technology, including edge computing and the IoMT, in developing prosperous smart cities. It also highlights the effectiveness of the MDP-FDRNN algorithm in managing resource allocation and addressing processing challenges at the network's edges, particularly in healthcare operations.
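The Markov decision process underlying MDP-FDRNN can be illustrated with the standard value-iteration solver on a toy two-state allocation problem. The states, actions, and rewards here are invented for illustration, and the paper pairs its MDP with a federated deep recurrent network rather than a tabular solver:

```python
def value_iteration(states, actions, step, reward, gamma=0.9, eps=1e-6):
    """Solve a small deterministic MDP; returns state values and a greedy policy."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman backup: best one-step reward plus discounted next value.
            best = max(reward(s, a) + gamma * V[step(s, a)] for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    policy = {s: max(actions,
                     key=lambda a: reward(s, a) + gamma * V[step(s, a)])
              for s in states}
    return V, policy

# Toy allocation MDP: serving a request from the edge earns the latency reward.
flip = {"edge": "cloud", "cloud": "edge"}
step = lambda s, a: s if a == "stay" else flip[s]
reward = lambda s, a: 1.0 if step(s, a) == "edge" else 0.0
V, policy = value_iteration(("edge", "cloud"), ("stay", "switch"), step, reward)
```

The greedy policy keeps traffic at the edge and moves it back whenever it lands in the cloud, which is the shape of decision the resource-allocation agent learns at scale.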

Citations: 0
Integration of a Lightweight Customized 2D CNN Model to an Edge Computing System for Real-Time Multiple Gesture Recognition
IF 5.5, CAS Tier 2 (Computer Science), Q1 Computer Science. Pub Date: 2023-12-15. DOI: 10.1007/s10723-023-09715-5
Hulin Jin, Zhiran Jin, Yong-Guk Kim, Chunyang Fan

Abstract

The human-machine interface (HMI) collects electrophysiological signals from the patient and utilizes them to operate a device. However, most applications are currently in the testing phase and are typically unavailable to everyone. Developing wearable HMI devices that are intelligent and more comfortable has been a recent focus of study. This work developed a portable, eight-channel electromyography (EMG) signal-based device that can distinguish 21 different types of motion. To acquire the EMG signals, an analog front-end (AFE) integrated chip (IC) was created, and an integrated EMG signal acquisition device combined with a stretchy wristband was made. Using the EMG movement signals of 10 volunteers, a SIAT database of 21 gestures was created. Using the SIAT dataset, a lightweight 2D CNN-LSTM model was developed and given specialized training. The signal recognition accuracy is 96.4%, and the training process took a median of 14 min 13 s. The model may be used on lower-performance edge computing devices because of its compact size, and it is anticipated that it will eventually be applied to smartphone terminals.
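A common first step before feeding EMG channels to a CNN-LSTM is slicing the raw signal into fixed-length windows and computing an envelope feature per window. The exact preprocessing used for the SIAT dataset is not given in the abstract; this is a minimal sliding-window RMS sketch for a single channel, with the window and step sizes chosen arbitrarily:

```python
import math

def window_rms(signal, win, step):
    """Root-mean-square envelope of one EMG channel over sliding windows,
    a common hand-crafted feature preceding CNN/LSTM classification."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        seg = signal[start:start + win]
        feats.append(math.sqrt(sum(x * x for x in seg) / win))
    return feats
```

Stacking these per-window features across the eight channels yields the 2-D (channels by time) input that a 2D CNN front end expects.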

Citations: 0
Healthcare and Fitness Services: A Comprehensive Assessment of Blockchain, IoT, and Edge Computing in Smart Cities
IF 5.5, CAS Tier 2 (Computer Science), Q1 Computer Science. Pub Date: 2023-12-15. DOI: 10.1007/s10723-023-09712-8
Yang-Yang Liu, Ying Zhang, Yue Wu, Man Feng

Edge computing, blockchain technology, and the Internet of Things have all been identified as key enablers of innovative city initiatives. A comprehensive examination of the research found that IoT, blockchain, and edge computing are now major factors in how efficiently smart cities provide healthcare, with IoT the most used of the three technologies. In this respect, edge computing and blockchain technology are particularly applicable to the healthcare industry for processing intelligent and secured data. Edge computing has been touted as an important technology for low-cost remote access, cutting latency, and boosting efficiency. Smart cities incorporate intelligent devices to enhance people's day-to-day lives, and the Internet of Medical Things (IoMT) and edge computing (EC) are the foundations of these devices. The increasing Quality of Service (QoS) demands of healthcare services require supercomputing that connects the IoMT and intelligent devices with edge processing. The healthcare applications of smart cities need reduced latencies; therefore, EC is necessary to reduce latency, energy, and bandwidth usage and to improve scalability. This paper developed a deep Q reinforcement learning algorithm with evolutionary optimization and compared it with traditional deep learning approaches for process congestion, to reduce the time and latency of patient health monitoring. The energy consumption, latency, and cost of the proposed model are lower than those of existing techniques. Among 100 tasks, nearly 95% are offloaded efficiently in the minimum time.
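The deep Q reinforcement learning approach described above can be illustrated, in miniature, with tabular Q-learning on a toy offloading decision. The queue levels, rewards, and hyperparameters below are invented; the paper uses a deep network with evolutionary optimization rather than a lookup table:

```python
import random

def train_offloader(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=1):
    """Tabular Q-learning for a toy offload decision: states are local queue
    levels (low/high); offloading when the local queue is high earns the
    better latency reward, while running locally is best when the queue is low."""
    rng = random.Random(seed)
    states, actions = ("low", "high"), ("local", "offload")
    Q = {(s, a): 0.0 for s in states for a in actions}
    reward = {("low", "local"): 1.0, ("low", "offload"): 0.2,
              ("high", "local"): -1.0, ("high", "offload"): 1.0}
    for _ in range(episodes):
        s = rng.choice(states)
        # Epsilon-greedy action selection.
        if rng.random() < eps:
            a = rng.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = rng.choice(states)          # next queue level from random arrivals
        target = reward[(s, a)] + gamma * max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q

Q = train_offloader()
```

After training, the greedy policy read off the table offloads only when the local queue is high, matching the latency-driven behaviour the paper's agent learns over its task stream.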

Citations: 0
Cost-Availability Aware Scaling: Towards Optimal Scaling of Cloud Services
IF 5.5, CAS Tier 2 (Computer Science), Q1 Computer Science, Pub Date: 2023-12-07, DOI: 10.1007/s10723-023-09718-2
Andre Bento, Filipe Araujo, Raul Barbosa

Cloud services have become increasingly popular for developing large-scale applications due to the abundance of resources they offer. The scalability and accessibility of these resources have made it easier for organizations of all sizes to develop and deploy sophisticated, demanding applications that meet demand instantly. Since cloud usage incurs monetary fees, one of the challenges for application developers and operators is to balance their budget constraints with crucial quality attributes, such as availability. Industry standards usually default to simplified solutions that cannot consider competing objectives simultaneously. Our research addresses this challenge by proposing a Cost-Availability Aware Scaling (CAAS) approach that uses multi-objective optimization of availability and cost. We evaluate CAAS on two open-source microservices applications, obtaining improved results compared to the industry-standard CPU-based Autoscaler (AS). For the first application, CAAS finds optimal system configurations with availability higher by one to two nines on average and costs reduced by 6% on average; for the second, availability improves by one nine on average and costs fall by up to 18% on average. The gap between our model's results and the default AS suggests that operators can significantly improve the operation of their applications.
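The cost/availability trade-off CAAS navigates can be illustrated with a toy enumeration. The sketch below is not the paper's optimizer: it simply assumes independent replicas with a fixed per-replica availability and monthly price, derives the two objectives for each replica count, and keeps the Pareto-optimal configurations.

```python
import math

# Illustrative constants; real values would come from measurements and billing.
REPLICA_AVAILABILITY = 0.99    # assumed availability of a single replica
COST_PER_REPLICA = 20.0        # assumed monthly cost per replica

def evaluate(n):
    # With independent replicas, the service fails only if all n fail.
    availability = 1.0 - (1.0 - REPLICA_AVAILABILITY) ** n
    return availability, n * COST_PER_REPLICA

def pareto_front(max_replicas=6):
    # Keep configurations not dominated in both availability and cost.
    points = [(n, *evaluate(n)) for n in range(1, max_replicas + 1)]
    front = []
    for n, avail, cost in points:
        dominated = any(a2 >= avail and c2 <= cost and (a2 > avail or c2 < cost)
                        for _, a2, c2 in points)
        if not dominated:
            front.append((n, avail, cost))
    return front

def nines(availability):
    # "Number of nines": 0.99 -> 2, 0.999 -> 3, ...
    return -math.log10(1.0 - availability)
```

Under these assumptions every replica count is Pareto-optimal (adding a replica always buys availability at higher cost), which is exactly why a scaler needs a second criterion, such as a budget cap or an availability target, to pick one point from the front.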

Citations: 0
Deep Learning-Based Multi-Domain Framework for End-to-End Services in 5G Networks
IF 5.5, CAS Tier 2 (Computer Science), Q1 Computer Science, Pub Date: 2023-12-04, DOI: 10.1007/s10723-023-09714-6
Yanjia Tian, Yan Dong, Xiang Feng

Over the past few years, network slicing has emerged as a pivotal component of 5G technology. It plays a critical role in effectively delineating network services according to a myriad of performance and operational requirements, all drawing from a shared pool of common resources. A core objective of 5G is to facilitate simultaneous network slicing, enabling the creation of multiple distinct end-to-end networks. This multiplicity of networks serves the paramount purpose of ensuring that the traffic within one network slice does not impede or adversely affect the traffic within another. Therefore, this paper proposes a deep learning-based multi-domain framework for end-to-end network slicing with traffic-aware prediction. The proposed method uses Deep Reinforcement Learning (DRL) for in-depth resource-allocation analysis and improves the Quality of Service (QoS). The DRL-based multi-domain framework provides traffic-aware prediction and enhances flexibility. The results demonstrate that the suggested approach outperforms conventional, heuristic, and randomized methods and improves resource use while maintaining QoS.
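To make the traffic-aware allocation idea concrete, the sketch below replaces the paper's DRL agent with a moving-average demand predictor per slice and a proportional allocator with a per-slice QoS floor. Class names, window sizes, and all figures are illustrative assumptions, not the authors' design.

```python
from collections import deque

class SlicePredictor:
    """Moving-average traffic predictor for one network slice (a toy
    stand-in for the learned prediction the framework provides)."""

    def __init__(self, window=3):
        self.history = deque(maxlen=window)

    def observe(self, demand):
        self.history.append(demand)

    def predict(self):
        return sum(self.history) / len(self.history)

def allocate(capacity, predictions, qos_floor):
    """Split total capacity across slices: guarantee each slice its QoS
    floor, then share the remainder in proportion to predicted demand."""
    spare = capacity - qos_floor * len(predictions)
    total = sum(predictions)
    return [qos_floor + spare * p / total for p in predictions]
```

For example, with 100 units of capacity, predicted demands of 20/30/50, and a floor of 10 per slice, the allocator returns 24, 31, and 45 units: the floor isolates slices from one another (no slice is starved by a neighbor's burst), which mirrors the isolation goal stated in the abstract.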

Citations: 0
A Bibliometric Analysis of Convergence of Artificial Intelligence and Blockchain for Edge of Things
IF 5.5, CAS Tier 2 (Computer Science), Q1 Computer Science, Pub Date: 2023-12-04, DOI: 10.1007/s10723-023-09716-4
Deepak Sharma, Rajeev Kumar, Ki-Hyun Jung

The convergence of Artificial Intelligence (AI) and Blockchain technologies has emerged as a powerful paradigm for addressing the challenges of data management, security, and privacy in the Edge of Things (EoTs) environment. This bibliometric analysis explores the research landscape and trends surrounding the convergence of AI and Blockchain for EoTs to gain insights into its development and potential implications. Because the field is new, research published during the past six years (2018-2023) in sources indexed by the Web of Science has been considered. A VoSViewer-based full-counting methodology has been used to analyze citation, co-citation, and co-authorship collaborations among authors, organizations, countries, sources, and documents. The full-counting method in VoSViewer assigns equal weight to all authors or sources when calculating bibliometric indicators. Co-occurrence, timeline, and burst-detection analyses of keywords and published articles were also carried out to unravel significant research trends on the convergence of AI and Blockchain for EoTs. Our findings reveal steady growth in research output, indicating the increasing importance of and interest in AI-enabled Blockchain solutions for EoTs. Further, the analysis uncovered key influential researchers and institutions driving advances in this domain, shedding light on potential collaborative networks and knowledge hubs. Additionally, the study examines the evolution of research themes over time, offering insights into emerging areas and future research directions. This bibliometric analysis contributes to an understanding of the state of the art in the convergence of AI and Blockchain for EoTs, highlighting the most influential works and identifying knowledge gaps. Researchers, industry practitioners, and policymakers can leverage these findings to inform their research strategies and decision-making processes, fostering innovation and advances in this cutting-edge interdisciplinary field.
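The full-counting convention the abstract mentions can be illustrated in a few lines: every keyword (or author) on a document receives full weight, in contrast to fractional counting, where a weight of 1/n would be shared. The small document set below is invented for demonstration.

```python
from itertools import combinations
from collections import Counter

# Three made-up documents, each represented by its keyword set.
docs = [
    {"AI", "Blockchain", "Edge"},
    {"AI", "Edge"},
    {"Blockchain", "IoT"},
]

def full_counts(docs):
    """Full counting: each keyword gets weight 1 per document it appears in,
    and each unordered keyword pair gets weight 1 per co-appearance."""
    occurrence = Counter()
    cooccurrence = Counter()
    for keywords in docs:
        for k in keywords:
            occurrence[k] += 1                # full weight, not 1/len(keywords)
        for a, b in combinations(sorted(keywords), 2):
            cooccurrence[(a, b)] += 1         # each pair counted once per doc
    return occurrence, cooccurrence
```

On this data, "AI" and "Edge" each occur twice and co-occur twice, so a co-occurrence map built from these counts would draw its strongest link between them; VoSViewer's maps are built from the same kind of counts at much larger scale.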

Citations: 0