
Journal of Cloud Computing: Latest Publications

TCP Stratos for stratosphere based computing platforms
Pub Date: 2024-03-15 | DOI: 10.1186/s13677-024-00620-0
A. A. Periola
Stratosphere computing platforms (SCPs) benefit from free cooling but face challenges that necessitate a re-design of the transmission control protocol (TCP). The redesign is motivated by stratospheric gravity waves (SGWs) and sudden stratospheric warming (SSW) events, both of which disturb the wireless channel during SCP packet communications. SCP packet transmission can be performed with existing TCP variants, but at the cost of high packet loss, because existing variants do not account for SGWs and SSWs. TCP variants designed for satellite links are also unsuitable, as they do not explicitly consider these phenomena. Moreover, the use of SCPs in the future internet is at a nascent stage. This research proposes a new TCP variant, TCP Stratos, which incorporates a parameter-transfer mechanism and comprises loss-based and delay-based components; its window evolution explicitly considers the occurrence of SSWs and SGWs. The performance benefit of the proposed approach is evaluated via MATLAB numerical simulation, since modelling the stratosphere is challenging for conventional tools and frameworks. Performance evaluation shows that using TCP Stratos instead of existing and improved TCP variants reduces the packet loss rate by an average of 7.1–23.1% and 3.8–12.8%, respectively, and enhances throughput by an average of 20.5–53% and 40.9–70%, respectively.
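The abstract describes TCP Stratos only as a hybrid of loss-based and delay-based components whose window evolution reacts to SSW/SGW events, so the following is a minimal sketch of what such an event-aware window update could look like. The `ssw_sgw_active` flag, the thresholds, and the backoff factors are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of an event-aware hybrid congestion-window update.
# All constants and the ssw_sgw_active signal are illustrative assumptions;
# the actual TCP Stratos rules are not reproduced here.

def update_cwnd(cwnd, ssthresh, rtt, base_rtt, loss_detected, ssw_sgw_active):
    """Return (cwnd, ssthresh) after one ACK or loss event. RTTs in seconds."""
    queue_delay = rtt - base_rtt  # delay-based congestion signal

    if loss_detected:
        # Loss-based component: multiplicative decrease, softened when the
        # loss coincides with a stratospheric disturbance (likely channel
        # loss rather than congestion).
        beta = 0.8 if ssw_sgw_active else 0.5
        ssthresh = max(2.0, cwnd * beta)
        cwnd = ssthresh
    elif queue_delay > 0.05:          # delay threshold (assumed)
        cwnd = max(2.0, cwnd - 1.0)   # delay-based gentle decrease
    elif cwnd < ssthresh:
        cwnd += 1.0                   # slow start
    else:
        cwnd += 1.0 / cwnd            # congestion avoidance

    return cwnd, ssthresh
```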
Optimizing the resource allocation in cyber physical energy systems based on cloud storage and IoT infrastructure
Pub Date: 2024-03-15 | DOI: 10.1186/s13677-024-00615-x
Zhiqing Bai, Caizhong Li, Javad Pourzamani, Xuan Yang, Dejuan Li
Given the prohibited operating zones, losses, and valve-point effects in power systems, energy optimization analysis in such systems involves numerous non-convex and non-smooth formulations, such as economic dispatch problems. To cover all practical scenarios, this paper also considers multi-fuel generators and transmission losses, although these features make economic dispatch problems even more complex from a non-convexity standpoint. To solve economic dispatch problems, an important consideration in power systems, this paper presents a modified, robust, and effective optimization algorithm. Several modifications are introduced to tackle this sophisticated problem and find the best solution while accounting for multiple fuels, the valve-point effect, large-scale systems, prohibited operating zones, and transmission losses. The proposed algorithm is applied to several complicated power systems: 6-, 13-, and 40-generator systems fed by a single fuel type, a 10-generator system with multiple fuels, and two large-scale cases comprising 80 and 120 generators. The effectiveness of the proposed method is evaluated in terms of accuracy, robustness, and convergence speed. Furthermore, the paper explores the integration of cloud storage and the internet of things (IoT) to augment the adaptability and monitoring capabilities of the proposed method when handling non-convex energy resource management and allocation problems across various generator counts and constraints. The results show that the proposed algorithm solves such problems irrespective of the number of generators and constraints, and performs well for both small and large systems. For example, it consistently yields the best results for the 6-plant system with and without losses, namely $15,276.894 and $15,443.7967. The improvements also allow the multi-fuel economic dispatch problem to be solved not only optimally ($623.83) but in fewer than 35 iterations. Lastly, the difference between the best-obtained result ($121,412) and the worst-obtained result ($121,316.1992) for the 40-plant system is only about $4, which is quite acceptable.
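For context, the non-convexity the abstract refers to comes largely from the standard valve-point cost model, F(P) = a + bP + cP^2 + |e*sin(f*(Pmin - P))|, combined with prohibited operating zones. The sketch below evaluates that standard model for an illustrative two-unit dispatch; all coefficients and zone bounds are placeholders, not the paper's test-system data.

```python
import numpy as np

# Standard valve-point fuel-cost model used in non-convex economic dispatch;
# the coefficients below are placeholders, not the paper's test-system data.

def fuel_cost(p, a, b, c, e, f, p_min):
    """Cost of one generator at output p (MW), with valve-point ripple."""
    return a + b * p + c * p**2 + abs(e * np.sin(f * (p_min - p)))

def violates_poz(p, zones):
    """True if p falls inside any prohibited operating zone (lo, hi)."""
    return any(lo < p < hi for lo, hi in zones)

# Example: total dispatch cost for two illustrative units.
units = [
    dict(a=100.0, b=2.45, c=0.0012, e=160.0, f=0.038, p_min=100.0,
         zones=[(210.0, 240.0)]),
    dict(a=120.0, b=2.32, c=0.0010, e=150.0, f=0.041, p_min=120.0,
         zones=[(300.0, 335.0)]),
]
dispatch = [250.0, 280.0]

total = 0.0
for p, u in zip(dispatch, units):
    assert not violates_poz(p, u["zones"]), "dispatch hits a prohibited zone"
    total += fuel_cost(p, u["a"], u["b"], u["c"], u["e"], u["f"], u["p_min"])
print(f"total cost: ${total:,.2f}")
```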
SRA-E-ABCO: terminal task offloading for cloud-edge-end environments
Pub Date: 2024-03-14 | DOI: 10.1186/s13677-024-00622-y
Shun Jiao, Haiyan Wang, Jian Luo
The rapid development of Internet technology, together with the emergence of intelligent applications, has raised the requirements for task offloading. In Cloud-Edge-End (CEE) environments, offloading the computing tasks of terminal devices to edge and cloud servers can effectively reduce system delay and alleviate network congestion. Designing a reliable task offloading strategy in CEE environments that meets users' requirements is a challenging issue. To this end, a Service Reliability Analysis and Elite Artificial Bee Colony Offloading model (SRA-E-ABCO) is presented for cloud-edge-end environments. Specifically, a Service Reliability Analysis (SRA) method is proposed to assist in predicting the offloading necessity of terminal tasks and analyzing the attributes of terminal devices and edge nodes. An Elite Artificial Bee Colony Offloading (E-ABCO) method is also proposed, which optimizes the offloading strategy by combining elite populations with improved fitness formulas, position-update formulas, and population-initialization methods. Simulation results on real datasets validate the efficiency of the proposed scheme, which not only reduces task offloading delay but also optimizes system overhead compared to baseline schemes.
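The elite-guided position update the abstract mentions can be illustrated with a minimal employed-bee pass: the classic ABC step plus an attraction toward a randomly chosen elite member. The coefficients and elite fraction are assumptions, and the onlooker/scout phases and the paper's improved fitness and initialization formulas are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_step(pop, obj, elite_frac=0.1):
    """One employed-bee pass with elite guidance (minimization)."""
    n, dim = pop.shape
    fit = np.apply_along_axis(obj, 1, pop)
    elite = pop[np.argsort(fit)[: max(1, int(elite_frac * n))]]
    for i in range(n):
        k = rng.choice([j for j in range(n) if j != i])   # random neighbour
        d = rng.integers(dim)                             # dimension to perturb
        phi, psi = rng.uniform(-1, 1), rng.uniform(0, 1)
        e = elite[rng.integers(len(elite))]
        cand = pop[i].copy()
        # classic ABC step plus attraction toward a random elite member
        cand[d] += phi * (pop[i, d] - pop[k, d]) + psi * (e[d] - pop[i, d])
        if obj(cand) < fit[i]:                            # greedy selection
            pop[i] = cand
    return pop

# Toy usage: minimize a sphere function in 3 dimensions.
pop = rng.uniform(-5, 5, size=(20, 3))
for _ in range(100):
    pop = abc_step(pop, obj=lambda x: float(np.sum(x**2)))
```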
FLM-ICR: a federated learning model for classification of internet of vehicle terminals using connection records
Pub Date: 2024-03-13 | DOI: 10.1186/s13677-024-00623-x
Kai Yang, Jiawei Du, Jingchao Liu, Feng Xu, Ye Tang, Ming Liu, Zhibin Li
With the rapid growth of Internet of Vehicles (IoV) technology, the performance and privacy of IoV terminals (IoVT) have become increasingly important. This paper proposes a federated learning model for IoVT classification using connection records (FLM-ICR) to address privacy concerns and poor computational performance in analyzing users' private data in the IoV. Within a horizontally federated client-server architecture, FLM-ICR uses an improved multi-layer perceptron and logistic-regression network as the model backbone, employs a federated momentum gradient algorithm as the local training optimizer, and applies a federated Gaussian differential-privacy algorithm to secure the computation process. Experiments evaluate the model's classification performance using the confusion matrix, explore the impact of client collaboration on model performance, demonstrate the model's suitability for imbalanced data distributions, and confirm the effectiveness of federated learning for model training. FLM-ICR achieves accuracy, precision, recall, specificity, and F1 score of 0.795, 0.735, 0.835, 0.75, and 0.782, respectively, outperforming existing research methods while balancing classification performance and privacy, making it suitable for IoV computation and analysis of private data.
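A minimal sketch of the training loop the abstract outlines follows: one federated round with client-side momentum gradient descent on a small logistic-regression model, norm clipping of the update, and Gaussian noise added before server averaging. The clip norm, noise scale, learning rate, and toy data are assumptions rather than the paper's calibrated privacy parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_gradient(w, data):
    """Logistic-regression gradient on one client's batch (X, y)."""
    X, y = data
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (p - y) / len(y)

def client_update(w, data, lr=0.1, beta=0.9, steps=5):
    """Local training with momentum gradient descent."""
    v = np.zeros_like(w)
    for _ in range(steps):
        v = beta * v + local_gradient(w, data)   # momentum accumulation
        w = w - lr * v
    return w

def private_round(w_global, clients, clip=1.0, sigma=0.5):
    """One federated round with clipped, noised client updates."""
    updates = []
    for data in clients:
        delta = client_update(w_global.copy(), data) - w_global
        delta *= min(1.0, clip / (np.linalg.norm(delta) + 1e-12))  # norm clipping
        delta += rng.normal(0.0, sigma * clip, delta.shape)        # Gaussian noise
        updates.append(delta)
    return w_global + np.mean(updates, axis=0)                     # server averaging

# Toy usage with synthetic client batches.
clients = [(rng.normal(size=(64, 10)), rng.integers(0, 2, 64).astype(float))
           for _ in range(5)]
w = np.zeros(10)
for _ in range(20):
    w = private_round(w, clients)
```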
Multi-dimensional resource allocation strategy for LEO satellite communication uplinks based on deep reinforcement learning
Pub Date: 2024-03-08 | DOI: 10.1186/s13677-024-00621-z
Yu Hu, Feipeng Qiu, Fei Zheng, Jilong Zhao
In LEO satellite communication systems, resource utilization is very low due to the constrained resources on satellites and the non-uniform distribution of traffic. In addition, the rapid movement of LEO satellites produces complicated, rapidly changing networks, which makes it difficult for traditional resource allocation strategies to improve utilization. To solve this problem, this paper proposes a resource allocation strategy based on deep reinforcement learning. The strategy takes the weighted sum of spectral efficiency, energy efficiency, and blocking rate as the optimization objective and constructs a joint power and channel allocation model, allocating channels and power according to the number of channels, the number of users, and the type of service. In the reward-decision mechanism, the maximum reward is obtained by maximizing the increment of the optimization target. During optimization, however, decisions that always focus on the optimal allocation for current users ignore the QoS of new users. To avoid this, current service beams are integrated with high-traffic beams, and beam states are refactored to maximize long-term benefit and improve system performance. Simulation experiments show that in scenarios with many users, the proposed strategy reduces the blocking rate by at least 5% compared to reinforcement-learning baselines, effectively enhancing resource utilization.
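The objective described above, a weighted sum whose increment drives the reward, can be sketched in a few lines; the weights and the increment-based shaping are assumptions, not the paper's exact formulation.

```python
# Illustrative reward for the DRL allocation agent: a weighted sum of
# spectral efficiency, energy efficiency, and blocking rate, rewarded
# through its increment. Weights are placeholders.

def reward(se, ee, blocking, prev_objective,
           w_se=0.5, w_ee=0.3, w_block=0.2):
    """se: bit/s/Hz, ee: bit/J, blocking: fraction of rejected requests."""
    objective = w_se * se + w_ee * ee - w_block * blocking
    return objective - prev_objective, objective  # (step reward, new baseline)

# Usage inside an environment step:
r, prev = reward(se=3.2, ee=1.8, blocking=0.04, prev_objective=0.0)
```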
Edge-cloud computing oriented large-scale online music education mechanism driven by neural networks
Pub Date: 2024-03-07 | DOI: 10.1186/s13677-023-00555-y
Wen Xing, Adam Slowik, J. Dinesh Peter
With the advent of the big-data era, edge-cloud computing has developed rapidly. In this era of popular digital music, various technologies have brought great convenience to online music education, but vast databases of digital music prevent educators from making purpose-specific choices. Music recommendation is therefore a promising direction for online music education. This paper proposes a deep learning model based on multi-source information fusion for music recommendation in an edge-cloud computing scenario. First, the music latent-factor vectors obtained by the Weighted Matrix Factorization (WMF) algorithm are used as the ground truth. Second, a neural network model fuses multiple sources of music information, including spectra extracted from additional music data, to predict the latent spatial features of music. Finally, a user's preference for a piece of music is predicted through the inner product of the user vector and the music vector. Experimental results on public datasets and real music data collected by edge devices demonstrate the effectiveness of the proposed method in music recommendation.
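The final recommendation step described here, scoring by the inner product of user and music latent vectors, reduces to a single matrix-vector product. The sketch below uses random vectors in place of WMF-trained factors, and the latent dimension and catalogue size are assumed.

```python
import numpy as np

# Minimal sketch of the scoring step: once user and item (music) latent
# vectors exist, predicted preference is their inner product. The WMF
# training itself (confidence-weighted factorization) is not shown.

def predict_preference(user_vec, music_vecs):
    """Score every track for one user; higher = more recommended."""
    return music_vecs @ user_vec

rng = np.random.default_rng(1)
k, n_tracks = 32, 1000                 # latent dimension, catalogue size (assumed)
user_vec = rng.normal(size=k)          # stand-in for a trained user factor
music_vecs = rng.normal(size=(n_tracks, k))

scores = predict_preference(user_vec, music_vecs)
top10 = np.argsort(scores)[::-1][:10]  # indices of the 10 best-scoring tracks
print(top10)
```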
RNA-RBP interactions recognition using multi-label learning and feature attention allocation
Pub Date: 2024-03-07 | DOI: 10.1186/s13677-024-00612-0
Huirui Han, Bandeh Ali Talpur, Wei Liu, Limei Wang, Bilal Ahmed, Nadia Sarhan, Emad Mahrous Awwad
In this study, we present a multi-label deep learning framework for predicting RNA-RBP (RNA-binding protein) interactions, a critical aspect of understanding the modulation of RNA functionality and its implications in disease pathogenesis. Our approach leverages machine learning to build a rapid and cost-efficient predictive model for these interactions. The proposed model captures the complex characteristics of RNA and recognizes the corresponding RBPs through a dual-module architecture. The first module employs convolutional neural networks (CNNs) for feature extraction from RNA sequences, enabling the model to discern nuanced patterns and attributes. The second module is a multi-view multi-label classification system with a feature attention mechanism, designed to analyze and distinguish between common and unique deep features derived from the diverse RNA characteristics. Extensive experiments on a comprehensive RNA-RBP interaction dataset show substantial improvements in the model's ability to predict RNA-RBP interactions compared with existing methodologies, underlining its potential contribution to understanding RNA-mediated biological processes and disease etiology.
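The dual-module design can be sketched as a 1-D CNN over one-hot RNA sequences, a soft feature-attention re-weighting, and a sigmoid multi-label head (one output per RBP). The layer sizes, sequence length, and label count below are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Sketch in the spirit of the abstract: CNN feature extractor, feature
# attention, multi-label sigmoid head. All dimensions are illustrative.

class RnaRbpNet(nn.Module):
    def __init__(self, seq_len=101, n_rbps=37, channels=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(4, channels, kernel_size=8, padding=4),  # 4 = A,C,G,U one-hot
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.attn = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())
        self.head = nn.Linear(channels, n_rbps)  # one logit per RBP label

    def forward(self, x):            # x: (batch, 4, seq_len)
        h = self.cnn(x).squeeze(-1)  # (batch, channels)
        h = h * self.attn(h)         # feature-attention re-weighting
        return self.head(h)          # train with BCEWithLogitsLoss

model = RnaRbpNet()
logits = model(torch.randn(2, 4, 101))
probs = torch.sigmoid(logits)        # independent probability per RBP
```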
Low-cost and high-performance abnormal trajectory detection based on the GRU model with deep spatiotemporal sequence analysis in cloud computing
Pub Date: 2024-03-05 | DOI: 10.1186/s13677-024-00611-1
Guohao Tang, Huaying Zhao, Baohua Yu
Trajectory anomalies serve as early indicators of potential issues and frequently provide valuable insights into event occurrence. Existing methods for detecting abnormal trajectories primarily compare the spatial characteristics of trajectories; they fail to capture the pattern and evolution of the temporal dimension within the trajectory data and thus inadequately identify the behavioral inertia of the target group. The few detection methods that do incorporate spatiotemporal features fail to adequately analyze the evolution of spatiotemporal sequences; consequently, methods that ignore temporal and spatial correlations are too one-sided. Recurrent neural networks (RNNs), especially gated recurrent units (GRUs) with their reset- and update-gate control units, possess nonlinear sequence-processing capabilities that enable effective extraction and analysis of both temporal and spatial characteristics. However, the basic GRU network model has limited expressive power and may not adequately capture complex sequence patterns and semantic information. To address these issues, this paper proposes an abnormal trajectory detection method based on an improved GRU model in cloud computing. To enhance anomaly-detection ability and training efficiency, strictly control the input of irrelevant features, and improve model fit, an improved model combining the random forest algorithm with a fully connected network is designed. The method deconstructs spatiotemporal semantics through the reset and update gates, while effectively capturing feature-evolution information and target behavioral inertia by leveraging the feature integration and nonlinear mapping capabilities of the fully connected network. Experimental results on the GeoLife GPS trajectory dataset indicate that the proposed approach improves generalization ability by 1% and reduces training cost by 31.68%, providing a practical solution for anomaly trajectory detection.
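The reset/update gating the abstract builds on is the standard GRU cell; a minimal numpy version is shown below, with assumed input features (e.g. latitude, longitude, speed, heading) and no training loop. The paper's random-forest feature selection and fully connected classifier are not reproduced.

```python
import numpy as np

# Minimal numpy GRU cell illustrating the reset/update gates; weight shapes
# are illustrative and training is omitted.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, params):
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(x @ Wz + h @ Uz + bz)              # update gate: how much to renew
    r = sigmoid(x @ Wr + h @ Ur + br)              # reset gate: how much history to use
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh + bh)  # candidate state
    return (1.0 - z) * h + z * h_tilde             # blended new hidden state

rng = np.random.default_rng(0)
d_in, d_h = 4, 16                                  # e.g. (lat, lon, speed, heading)
params = [rng.normal(scale=0.1, size=s) for s in
          [(d_in, d_h), (d_h, d_h), (d_h,)] * 3]   # Wz,Uz,bz, Wr,Ur,br, Wh,Uh,bh
h = np.zeros(d_h)
for x in rng.normal(size=(50, d_in)):              # one 50-point trajectory
    h = gru_cell(x, h, params)                     # final h feeds the FC classifier
```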
AI-empowered mobile edge computing: inducing balanced federated learning strategy over edge for balanced data and optimized computation cost
Pub Date: 2024-03-04 | DOI: 10.1186/s13677-024-00614-y
Momina Shaheen, Muhammad S. Farooq, Tariq Umer
In mobile edge computing, the federated learning framework enables collaborative learning models across edge nodes without necessitating the direct exchange of their data, addressing significant challenges around access rights, privacy, security, and the use of heterogeneous data sources. Edge devices generate and gather data across the network in a non-IID (not independent and identically distributed) manner, leading to potential variation in the number of data samples among edge networks. A method is proposed for federated learning in an edge computing setting that applies AI techniques such as data augmentation and class estimation and balancing during the training process, with minimized computational overhead. This is accomplished through data augmentation techniques that refine the data distribution; additionally, class estimation is leveraged and linear regression is employed for client-side model training, a strategic approach that reduces computational costs. To validate its effectiveness, the approach is applied to two distinct datasets: one of image data (FashionMNIST) and one of numerical and textual stock data for predictive analysis of stock values. The approach demonstrates commendable performance across both dataset types, reaching more than 92% accuracy in the federated learning paradigm.
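Client-side class estimation and balancing of the kind described here can be sketched as: count per-class samples, then oversample minority classes with a light augmentation transform before local training. The noise-based transform below is a placeholder for the paper's augmentation techniques.

```python
import numpy as np

# Sketch of class estimation and balancing on one client's local dataset.
# The additive-noise augmentation is a placeholder transform.

def balance_classes(x, y, rng):
    """Oversample minority classes until all classes match the largest one."""
    classes, counts = np.unique(y, return_counts=True)   # class estimation
    target = counts.max()
    xs, ys = [x], [y]
    for c, n in zip(classes, counts):
        if n == target:
            continue
        idx = rng.choice(np.where(y == c)[0], size=target - n, replace=True)
        aug = x[idx] + rng.normal(0.0, 0.01, x[idx].shape)  # placeholder augmentation
        xs.append(aug)
        ys.append(np.full(target - n, c))
    return np.concatenate(xs), np.concatenate(ys)

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 8))
y = np.array([0] * 80 + [1] * 20)           # imbalanced toy client data
xb, yb = balance_classes(x, y, rng)          # now 80 samples per class
```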
Automated visual quality assessment for virtual and augmented reality based digital twins
Pub Date: 2024-02-26 | DOI: 10.1186/s13677-024-00616-w
Ben Roullier, Frank McQuade, Ashiq Anjum, Craig Bower, Lu Liu
Virtual and augmented reality digital twins are becoming increasingly prevalent in a number of industries, though the production of digital-twin applications is still prohibitively expensive for many smaller organisations. A key step towards reducing the cost of digital twins lies in automating the production of 3D assets; however, efforts are complicated by the lack of suitable automated methods for determining the visual quality of these assets. While visual quality assessment has been an active area of research for a number of years, few publications consider this process in the context of asset creation for digital twins. In this work, we introduce an automated procedure that uses machine learning to assess the visual impact of decimation, a process commonly used in the production of 3D assets that has thus far been under-represented in the visual assessment literature. Our model combines 108 geometric and perceptual metrics to determine whether a 3D object has been unacceptably distorted during decimation. It is trained on almost 4,000 distorted meshes, giving a significantly wider range of applicability than many models in the literature. Our results show a precision of over 97% against a set of test models, and performance tests show the model can complete assessments within 2 minutes on models of up to 25,000 polygons. Based on these results, we believe our model represents both a significant advance in the field of visual quality assessment and an important step towards reducing the cost of virtual and augmented reality based digital twins.
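The assessment pipeline reduces to: extract a fixed vector of geometric/perceptual metrics per (original, decimated) mesh pair, then train a binary acceptable-vs-distorted classifier. The sketch below stands in random features and labels for the real 108-metric extractor, and the random-forest choice is an assumption, as the abstract does not name the classifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Sketch of metric-vector classification for decimation quality assessment.
# X would come from a real 108-metric extractor over mesh pairs; random
# features and synthetic labels stand in for it here.

rng = np.random.default_rng(0)
n_pairs, n_metrics = 4000, 108
X = rng.normal(size=(n_pairs, n_metrics))       # stand-in metric vectors
y = (X[:, :10].mean(axis=1) > 0.0).astype(int)  # stand-in distortion labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:3200], y[:3200])                     # train on 80% of pairs
print("held-out accuracy:", clf.score(X[3200:], y[3200:]))
```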