
ITU journal : ICT discoveries, latest publications

Build your own closed loop: Graph-based proof of concept in closed loop for autonomous networks
Pub Date : 2023-09-14 DOI: 10.52953/opdk5666
Jaime Fúster de la Fuente, Álvaro Pendás Recondo, Paul Harvey, Tarek Mohamed, Chandan Singh, Vipul Sanap, Ayush Kumar, Sathish Venkateswaran, Sarvasuddi Balaganesh, Rajat Duggal, Sree Ganesh Lalitaditya Divakarla, Vaibhava Krishna Devulapali, Ebeledike Frank Chukwubuikem, Emmanuel Othniel Eggah, Abel Oche Moses, Nuhu Kontagora Bello, James Agajo, Wael Alron, Fathi Abdeldayem, Melanie Espinoza Hernández, Abigail Morales Retana, Jackeline García Alvarado, Nicolle Gamboa Mena, Juliana Morales Alvarado, Ericka Pérez Chinchilla, Amanda Calderón Campos, Derek Rodríguez Villalobos, Oscar Castillo Brenes, Kodandram Ranganath, Ayushi Khandal, Rakshesh P Bhatt, Kunal Mahajan, Prikshit CS, Ashok Kamaraj, Srinwaynti Samaddar, Sivaramakrishnan Swaminathan, M Sri Bhuvan, Nagaswaroop S N, Blessed Guda, Ibrahim Aliyu, Kim Jinsul, Vishnu Ram
Next Generation Networks (NGNs) are expected to handle heterogeneous technologies, services, verticals and devices of increasing complexity. It is essential to devise an innovative approach to automatically and efficiently manage NGNs to deliver an adequate end-to-end Quality of Experience (QoE) while reducing operational expenses. An Autonomous Network (AN) using a closed loop can self-monitor, self-evaluate and self-heal, making it a potential solution for managing the NGN dynamically. This study describes the major results of building a closed-loop Proof of Concept (PoC) for various AN use cases organized by the International Telecommunication Union Focus Group on Autonomous Networks (ITU FG-AN). The scope of this PoC includes the representation of closed-loop use cases in a graph format, the development of evolution/exploration mechanisms to create new closed loops based on the graph representations, and the implementation of a reference orchestrator to demonstrate the parsing and validation of the closed loops. The main conclusions and future directions are summarized here, including observations and limitations of the PoC.
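The graph representation and validation step described in the abstract can be illustrated with a small sketch. This is not the PoC's actual schema or orchestrator; the stage names and the dict-based graph are hypothetical, and the validator only checks the one property that makes a loop "closed": every stage is reachable and a feedback edge returns to the start.

```python
# Illustrative sketch (hypothetical schema): a closed loop as a directed
# graph of stages, with a minimal validator in the spirit of the PoC's
# parsing/validation step.

def validate_closed_loop(graph, start):
    """Return True if every declared stage is reachable from `start`
    and some reachable stage feeds back into `start` (closing the loop)."""
    visited, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        stack.extend(graph.get(node, []))
    returns_to_start = any(start in graph.get(n, []) for n in visited)
    return visited == set(graph) and returns_to_start

# A hypothetical monitor -> analyze -> decide -> act loop:
loop = {
    "monitor": ["analyze"],
    "analyze": ["decide"],
    "decide": ["act"],
    "act": ["monitor"],   # feedback edge closes the loop
}
print(validate_closed_loop(loop, "monitor"))  # True
```

A real orchestrator would additionally validate stage configurations and wiring; this toy version only checks reachability and the feedback edge.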
Citations: 0
Designing graph neural networks training data with limited samples and small network sizes
Pub Date : 2023-09-12 DOI: 10.52953/afyw5455
Junior Momo Ziazet, Charles Boudreau, Oscar Delgado, Brigitte Jaumard
Machine learning is a data-driven domain, which means a learning model's performance depends on the availability of large volumes of data to train it. However, by improving data quality, we can train effective machine learning models with little data. This paper demonstrates this possibility by proposing a methodology to generate high-quality data in the networking domain. We designed a dataset to train a given Graph Neural Network (GNN) that not only contains a small number of samples, but whose samples also feature network graphs of a reduced size (10-node networks). Our evaluations indicate that the dataset generated by the proposed pipeline can train a GNN model that scales well to larger networks of 50 to 300 nodes. The trained model compares favorably to the baseline, achieving a mean absolute percentage error of 5-6%, while being significantly smaller at 90 samples total (vs. thousands of samples for the baseline).
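The two quantitative pieces of the abstract, small fixed-size training topologies and the MAPE metric used to measure generalization, can be sketched as follows. The generation scheme and all names here are illustrative assumptions, not the paper's actual pipeline.

```python
# Hedged sketch: generating small (10-node) random topologies for training,
# plus the MAPE metric the paper reports (5-6% on larger networks).
import random

def random_topology(n_nodes=10, p=0.3, seed=0):
    """Return an edge list for a random connected topology of n_nodes.
    A path 'spine' guarantees connectivity; extra edges are added with prob p."""
    rng = random.Random(seed)
    edges = [(i, i + 1) for i in range(n_nodes - 1)]
    edges += [(i, j) for i in range(n_nodes) for j in range(i + 2, n_nodes)
              if rng.random() < p]
    return edges

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs(t - p) / abs(t)
                       for t, p in zip(y_true, y_pred)) / len(y_true)

print(round(mape([100, 200], [95, 210]), 2))  # 5.0
```

The paper's point is that quality beats quantity: 90 such small samples sufficed where baselines used thousands.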
Citations: 0
Oracle-based data generation for highly efficient digital twin network training
Pub Date : 2023-09-08 DOI: 10.52953/aweu6345
Eliyahu Sason, Yackov Lubarsky, Alexei Gaissinski, Eli Kravchik, Pavel Kisilev
Recent advances in Graph Neural Networks (GNNs) have opened new capabilities to analyze complex communication systems. However, little work has been done to study the effects of limited data samples on the performance of GNN-based systems. In this paper, we present a novel solution to the problem of finding an optimal training set for efficient training of a RouteNet-Fermi GNN model. The proposed solution ensures good model generalization to large previously unseen networks under strict limitations on the training data budget and training topology sizes. Specifically, we generate an initial data set by emulating the flow distribution of large networks while using small networks. We then deploy a new clustering method that efficiently samples the above generated data set by analyzing the data embeddings from different Oracle models. This procedure provides a very small but information-rich training set. The above data embedding method translates highly heterogeneous network samples into a common embedding space, wherein the samples can be easily related to each other. The proposed method outperforms state-of-the-art approaches, including the winning solutions of the 2022 Graph Neural Networking challenge.
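The abstract's core mechanic, picking a small but information-rich subset by working in a common embedding space, can be approximated with a simple diversity-selection sketch. The paper uses a clustering method over Oracle-model embeddings; greedy farthest-point sampling below is a stand-in for that idea, and all names and data are illustrative.

```python
# Hedged sketch: selecting a small, information-rich training subset from
# embedding vectors. Farthest-point sampling approximates the paper's
# clustering-based selection (illustrative substitute, not their method).
import math

def select_diverse(embeddings, k):
    """Greedily pick k indices whose points are mutually far apart
    in embedding space, starting from the first sample."""
    chosen = [0]
    while len(chosen) < k:
        # pick the point whose nearest already-chosen neighbor is farthest
        best = max((i for i in range(len(embeddings)) if i not in chosen),
                   key=lambda i: min(math.dist(embeddings[i], embeddings[c])
                                     for c in chosen))
        chosen.append(best)
    return chosen

emb = [(0, 0), (0.1, 0), (5, 5), (5.1, 5), (10, 0)]
print(select_diverse(emb, 3))  # -> [0, 4, 2]: near-duplicates are skipped
```

Near-duplicate samples (indices 0 and 1, 2 and 3) contribute little new information, so the selection keeps only one of each pair.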
Citations: 0
Data-efficient GNN models of communication networks using beta-distribution-based sample ranking
Pub Date : 2023-09-08 DOI: 10.52953/fuqe7013
Max Helm, Benedikt Jaeger, Georg Carle
Machine learning models for tasks in communication networks often require large datasets to be trained. This training is cost intensive, and solutions to reduce these costs are required. It is not clear what the best approach to solve this problem is. Here we show an approach that is able to create a minimally-sized training dataset while maintaining high predictive power of the model. We apply our approach to a state-of-the-art graph neural network model for performance prediction in communication networks. Our approach is limited to a dataset of 100 samples with reduced sizes and achieves a MAPE of 9.79% on a test dataset containing significantly larger problem sizes, compared to a baseline approach which achieved a MAPE of 37.82%. We think this approach can be useful to create high-quality datasets of communication networks and decrease the time needed to train graph neural network models on performance prediction tasks.
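The title's "beta-distribution-based sample ranking" is not spelled out in the abstract, so the following is only one plausible reading of the mechanic: normalize a per-sample feature into [0, 1], score it with a Beta(a, b) density, and keep the top-ranked samples. The parameters, feature, and function names are all assumptions for illustration.

```python
# Hedged sketch: ranking candidate training samples with a Beta density.
# The paper's exact criterion is not reproduced here; this only shows the
# mechanic of Beta-weighted ranking (a, b, and the feature are illustrative).
import math

def beta_pdf(x, a, b):
    """Beta(a, b) probability density at x in (0, 1)."""
    coeff = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return coeff * x ** (a - 1) * (1 - x) ** (b - 1)

def rank_samples(features, a=2.0, b=5.0, k=2):
    """Keep the k samples whose normalized feature scores highest
    under Beta(a, b); mode at (a-1)/(a+b-2) = 0.2 here."""
    return sorted(features, key=lambda x: beta_pdf(x, a, b), reverse=True)[:k]

print(rank_samples([0.05, 0.2, 0.5, 0.9], k=2))  # [0.2, 0.05]
```

The effect is a tunable preference: with Beta(2, 5) the ranking favors samples near the distribution's mode rather than uniformly across the feature range.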
Citations: 0
AI-driven container security approaches for 5G and beyond: A survey
Pub Date : 2023-06-23 DOI: 10.52953/zrck3746
Ilter Taha Aktolga, Elif Sena Kuru, Yigit Sever, Pelin Angin
The rising use of microservice-based software deployment on the cloud leverages containerized software extensively. The security of applications running inside containers, as well as the container environment itself, is critical for infrastructure in cloud settings and 5G. To address security concerns, research efforts have been focused on container security, with subfields such as intrusion detection, malware detection and container placement strategies. These security efforts fall roughly into two categories: rule-based approaches and machine-learning approaches that can respond to novel threats. In this study, we survey the container security literature, focusing on approaches that leverage machine learning to address security challenges.
Citations: 0
ANALYTIC MODELS FOR BISTATIC SCATTERING FROM A RANDOMLY ROUGH SURFACE WITH COMPLEX RELATIVE PERMITTIVITY.
Pub Date : 2019-11-19
Mostafa A Karam, Ryan S McDonough

This study provides explicit mathematical formulations for the bistatic scattering coefficient from a randomly rough surface with a complex relative permittivity based on the following analytic models: small perturbation model (SPM), physical optics model (PO), and Kirchhoff approximation model (KA). Then it addresses the two shortcomings associated with each of the three models: i) limited applicability domain, and ii) null predicted values for the cross polarized bistatic scattering coefficients within plane of incidence. The plane of incidence contains both backscattering direction and forward (specular reflection) direction which are of interest to the spectrum community.
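The paper's full bistatic formulations are not reproduced in the abstract, but all three models (SPM, PO, KA) build on the Fresnel reflection coefficients of a half-space with complex relative permittivity. A minimal sketch of those standard textbook coefficients (the example permittivity value is arbitrary):

```python
# Standard Fresnel reflection coefficients for a half-space with complex
# relative permittivity eps_r (textbook formulas, not the paper's full
# bistatic scattering expressions).
import cmath, math

def fresnel(eps_r, theta_i):
    """Return (R_h, R_v): horizontal and vertical Fresnel reflection
    coefficients for incidence angle theta_i (radians) onto eps_r."""
    cos_t = math.cos(theta_i)
    root = cmath.sqrt(eps_r - math.sin(theta_i) ** 2)
    r_h = (cos_t - root) / (cos_t + root)
    r_v = (eps_r * cos_t - root) / (eps_r * cos_t + root)
    return r_h, r_v

# Arbitrary lossy-soil-like example: eps_r = 15 - 3j at 40 degrees incidence
r_h, r_v = fresnel(15 - 3j, math.radians(40))
print(abs(r_h), abs(r_v))
```

A complex eps_r (lossy medium) makes both coefficients complex, with magnitudes below one; the loss term is what the three analytic models must carry through to the bistatic scattering coefficients.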

Citations: 0