
Latest Publications: International Journal of Computer Network and Information Security

Secure Data Storage and Retrieval over the Encrypted Cloud Computing
Q1 Mathematics · Pub Date: 2024-08-08 · DOI: 10.5815/ijcnis.2024.04.04
Jaydip Kumar, Hemant Kumar, K. Singh, Vipin Saxena
Information security in cloud computing refers to the protection of data items such as text, images, audio, and video files. In the modern era, data volumes are growing rapidly from gigabytes to terabytes or even petabytes, owing to the generation of vast amounts of real-time data. The majority of this data is stored in cloud computing environments and is sent or received over the internet. Because cloud computing offers internet-based services, attackers and illegitimate users on the internet constantly try to gain access to users' private data without permission; hackers frequently replace genuine data with forged data. As a result, data security has recently attracted considerable attention, and access rights to files in the cloud must be granted only to authorized users. To counter these security threats, a security model is proposed for cloud computing that enhances the security of cloud data through fingerprint authentication for access control, with a genetic algorithm used for encryption/decryption of the cloud data. To search for desired data in the cloud, a fuzzy encrypted-keyword search technique is used, and the encrypted keywords are stored in cloud storage using SHA-256 hashing. The proposed model minimizes computation time while strengthening resistance to security threats over the cloud. The computed results are presented in the form of figures and tables.
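The role of SHA-256 in the keyword index is easy to illustrate. The sketch below is not the authors' implementation; it is a minimal Python illustration, assuming a simple scheme in which each normalized keyword is hashed with SHA-256 before being stored, so the server can match queries without seeing plaintext keywords. The fuzzy matching and the genetic-algorithm encryption of the proposed model are omitted.

```python
import hashlib

def keyword_trapdoor(keyword: str, salt: bytes = b"") -> str:
    """Hash a normalized keyword with SHA-256; only this digest is stored in the cloud."""
    return hashlib.sha256(salt + keyword.strip().lower().encode("utf-8")).hexdigest()

# Build an encrypted keyword index: digest -> list of file identifiers.
index = {}
for file_id, keywords in {"doc1": ["cloud", "security"], "doc2": ["genetic", "cloud"]}.items():
    for kw in keywords:
        index.setdefault(keyword_trapdoor(kw), []).append(file_id)

# Search: the client sends only the digest of the query keyword.
query = keyword_trapdoor("Cloud")
print(index.get(query, []))  # ['doc1', 'doc2'] -- matched without revealing the plaintext keyword
```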
Citations: 1
An Enhanced Process Scheduler Using Multi-Access Edge Computing in An IoT Network
Q1 Mathematics · Pub Date: 2024-08-08 · DOI: 10.5815/ijcnis.2024.04.09
P. S., S. Kuzhalvaimozhi, Bhuvan K., Ramitha R., Tanisha Machaiah M.
Multi-access edge computing (MEC) can provide high bandwidth and low latency, ensuring high efficiency in performing network operations, and is therefore a promising technology. MEC allows data to be processed and analysed at the network edge, but it has only a finite pool of resources. To work within this restriction, an orchestrator can use a scheduling algorithm to deliver high-quality services by choosing when and where each process should be executed; the scheduling algorithm must meet the expected outcome while using fewer resources. This paper presents a scheduling algorithm with two cooperative levels and an orchestrator layer at the centre. The first level schedules local processes on the MEC servers, while the second level, representing the orchestrator, allocates processes to nearby stations or the cloud. Depending on latency and throughput, processes are executed according to their priority. A resource optimization algorithm is also proposed for additional performance. This offers a cost-efficient solution with good service availability. The proposed algorithm achieves an average wait time of 2.37 ms and an average blocking percentage of 0.4. The blocking percentage is 1.65 times better than Shortest Job First Scheduling (SJFS) and 1.3 times better than Earliest Deadline First Scheduling (EDFS). The optimization algorithm works with many kinds of network traffic models, such as uniformly distributed loads and base stations with unbalanced loads.
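A compact sketch of the two-level idea follows. It is an illustrative Python reduction, not the paper's algorithm: it assumes each process carries a priority and a latency budget, the first level runs latency-critical work on the local MEC server in priority order, and the orchestrator level forwards the rest to a nearby station or the cloud when local capacity is exhausted; the thresholds are invented for illustration.

```python
import heapq

def schedule(processes, mec_capacity):
    """processes: list of (priority, latency_budget_ms, name); lower priority value = more urgent."""
    local, offloaded = [], []
    heap = list(processes)
    heapq.heapify(heap)                        # level 1: priority order on the MEC server
    while heap:
        prio, budget, name = heapq.heappop(heap)
        if mec_capacity > 0 and budget < 50:   # latency-critical work stays at the edge
            local.append(name)
            mec_capacity -= 1
        else:                                  # level 2: the orchestrator forwards the rest
            offloaded.append((name, "nearby-station" if budget < 200 else "cloud"))
    return local, offloaded

print(schedule([(1, 20, "p1"), (2, 500, "p2"), (1, 100, "p3")], mec_capacity=1))
```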
Citations: 0
Universal On-board Neural Network System for Restoring Information in Case of Helicopter Turboshaft Engine Sensor Failure
Q1 Mathematics · Pub Date: 2024-08-08 · DOI: 10.5815/ijcnis.2024.04.05
Serhii Vladov, Ruslan Yakovliev, Victoria Vysotska, Dmytro Uhryn, Yuriy Ushenko
This work focuses on developing a universal onboard neural network system for restoring information when helicopter turboshaft engine sensors fail. A mathematical task was formulated to determine the occurrence and location of these sensor failures using a multi-class Bayesian classification model that incorporates prior knowledge and updates probabilities with new data. The Bayesian approach was employed for identifying and localizing sensor failures, with a Bayesian neural network of 4–6–3 structure as the core of the developed system. A training algorithm for the Bayesian neural network was created: it estimates the prior distribution of network parameters through variational approximation, maximizes the evidence lower bound (ELBO) in place of the direct likelihood, and updates parameters by calculating gradients of the log-likelihood and the evidence lower bound, while adding regularization terms for warnings, distributions, and uncertainty estimates to interpret the results. This approach ensures balanced data handling, effective training (achieving nearly 100% accuracy on both training and validation sets), and improved model understanding (with training losses not exceeding 2.5%). An example demonstrates solving the information-restoration task when the gas-generator rotor r.p.m. sensor fails in the TV3-117 helicopter turboshaft engine. The feasibility of deploying the developed onboard neural network system on a helicopter using the Intel Neural Compute Stick 2 neuro-processor has been analytically demonstrated.
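The training objective described above can be sketched in a few lines. The following Python/NumPy fragment is an illustrative reduction under Gaussian mean-field assumptions, not the authors' code: each weight has a learned mean and standard deviation, a sample is drawn by reparameterization, and the loss combines a negative log-likelihood with the closed-form KL term that maximizing the ELBO implies; the shapes mimic one layer of the 4–6–3 network, and all data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Variational parameters for the 4x6 weight matrix of the first layer.
mu = rng.normal(0, 0.1, (4, 6))
rho = np.full((4, 6), -3.0)

def sample_weights(mu, rho):
    sigma = np.log1p(np.exp(rho))        # softplus keeps sigma positive
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps, sigma       # reparameterization trick

def kl_gaussian(mu, sigma):
    # KL( N(mu, sigma^2) || N(0, 1) ), summed over all weights
    return np.sum(np.log(1.0 / sigma) + (sigma**2 + mu**2 - 1.0) / 2.0)

w, sigma = sample_weights(mu, rho)
x = rng.standard_normal((8, 4))          # a batch of 8 synthetic sensor feature vectors
y = rng.standard_normal((8, 6))
pred = np.tanh(x @ w)
nll = 0.5 * np.mean((pred - y) ** 2)     # Gaussian negative log-likelihood (up to a constant)
loss = nll + kl_gaussian(mu, sigma) / 1000.0  # minimizing this maximizes the ELBO (KL scaled by dataset size)
print(f"nll={nll:.4f}, kl={kl_gaussian(mu, sigma):.2f}, loss={loss:.4f}")
```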
Citations: 0
BSHOA: Energy Efficient Task Scheduling in Cloud-fog Environment
Q1 Mathematics · Pub Date: 2024-08-08 · DOI: 10.5815/ijcnis.2024.04.06
Santhosh Kumar Medishetti, Ganesh Reddy
Cloud-fog computing frameworks are innovative frameworks designed to improve present Internet of Things (IoT) infrastructures. A major limitation for IoT applications is the availability of continuous energy sources for fog computing servers, because transmitting the enormous amount of data generated by IoT devices increases network bandwidth overhead and slows response times. Therefore, this paper proposes the Butterfly Spotted Hyena Optimization algorithm (BSHOA) as an energy-aware task-scheduling technique for IoT requests in a cloud-fog environment. In this hybrid BSHOA algorithm, the Butterfly Optimization Algorithm (BOA) is combined with Spotted Hyena Optimization (SHO) to enhance the global and local search behaviour of BOA while finding the optimal solution to the problem under consideration. To show the applicability and efficiency of the presented BSHOA approach, experiments are conducted on real workloads taken from the Parallel Workload Archive, comprising the NASA Ames iPSC/860 and HPC2N (High-Performance Computing Center North) workloads. The findings indicate that BSHOA has a strong capacity for dealing with the task-scheduling issue and outperforms other approaches on performance parameters including throughput, energy usage, and makespan.
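To make the hybrid search concrete, here is a heavily simplified Python sketch of a BSHOA-style loop. It follows the general shape of the published BOA rule (fragrance-weighted steps toward the global best) and an SHO-style encircling step, but the switching logic, coefficients, and the fitness function are illustrative placeholders, not the paper's specification; a real task scheduler would decode each vector into a task-to-node assignment and score its energy and makespan.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):
    # Placeholder objective: stands in for the estimated energy/makespan of a schedule.
    return np.sum(x ** 2)

pop = rng.uniform(-5, 5, (20, 8))   # 20 candidate schedules encoded as real vectors
c, a, p, iters = 0.01, 0.1, 0.8, 100  # BOA sensory modality, power exponent, switch probability

for it in range(iters):
    fit = np.array([fitness(x) for x in pop])
    best = pop[fit.argmin()].copy()
    fragrance = c * (1.0 / (1.0 + fit)) ** a            # stimulus intensity derived from fitness
    for i in range(len(pop)):
        r = rng.random()
        if r < p:                                       # BOA global phase: move toward the best
            pop[i] += (r**2 * best - pop[i]) * fragrance[i]
        else:                                           # SHO-style encircling of the best solution
            B = 2.0 * rng.random(pop.shape[1])
            E = (2.0 - 2.0 * it / iters) * (2.0 * rng.random(pop.shape[1]) - 1.0)
            pop[i] = best - E * np.abs(B * best - pop[i])

print("best fitness:", min(fitness(x) for x in pop))
```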
Citations: 0
Targeted Attacks Detection and Security Intruders Identification in the Cyber Space
Q1 Mathematics · Pub Date: 2024-08-08 · DOI: 10.5815/ijcnis.2024.04.10
Z. Avkurova, Sergiy Gnatyuk, Bayan Abduraimova, Kaiyrbek Makulov
The number of new cybersecurity threats and opportunities is increasing over time, as is the amount of information generated, processed, stored, and transmitted using ICTs. Particularly sensitive are the state's critical infrastructure objects, which include the mining industry, transport, telecommunications, the banking system, and others. From this standpoint, developing systems for detecting attacks and identifying intruders (including those targeting the state's critical infrastructure) is an important and relevant scientific task, and it defines the tasks of this article. The paper identifies the main factors influencing the choice of the most effective method for calculating importance coefficients, so as to increase the objectivity and simplicity of expert assessment of security events in cyberspace. A methodology for conducting an experimental study was also developed, defining the goals and objectives of the experiment, the input and output parameters, the hypothesis and research criteria, the sufficiency of experimental objects, and the sequence of necessary actions. The experimental study confirmed the adequacy of the proposed models, as well as the ability of the method and system built on them to detect targeted attacks and identify intruders in cyberspace at an early stage, a capability not included in the functionality of modern intrusion detection and prevention systems.
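As a small illustration of what importance coefficients can look like in practice, the sketch below normalizes hypothetical expert scores for several security-event factors into weights that sum to one. This is a generic example for intuition only; the factor names and scores are invented, and the paper's selected calculation method may differ.

```python
# Hypothetical expert scores (1-10) for factors influencing a security event's severity.
scores = {"attack persistence": 9, "target criticality": 8, "data sensitivity": 6, "anomaly rate": 4}

total = sum(scores.values())
weights = {factor: s / total for factor, s in scores.items()}  # importance coefficients summing to 1

for factor, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{factor:>20}: {w:.3f}")
```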
Citations: 0
Attack Modeling and Security Analysis Using Machine Learning Algorithms Enabled with Augmented Reality and Virtual Reality
Q1 Mathematics · Pub Date: 2024-08-08 · DOI: 10.5815/ijcnis.2024.04.08
Momina Mushtaq, Rakesh Kumar Jha, Manish Sabraj, Shubha Jain
Augmented Reality (AR) and Virtual Reality (VR) are innovative technologies that are gaining widespread recognition. They possess the capability to transform and redefine our interactions with the surrounding environment. However, as these technologies spread, they also introduce new security challenges. In this paper, we discuss the security challenges posed by AR and VR and propose a machine-learning-based approach to address them. We also discuss how machine learning can be used to detect and prevent attacks in AR and VR. By leveraging the power of machine learning algorithms, we aim to bolster the security defences of AR and VR systems. To accomplish this, we have conducted a comprehensive evaluation of various machine learning algorithms, meticulously analysing their performance and efficacy in enhancing security. Our results show that machine learning can be an effective way to improve the security of AR and VR systems.
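The kind of comparative evaluation described here can be reproduced in outline with scikit-learn. The snippet below is a generic sketch that uses synthetic data in place of the (unspecified) AR/VR attack dataset and compares a few common classifiers by cross-validated accuracy; the classifier list and dataset shape are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for AR/VR session features labelled attack vs. benign (imbalanced).
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2], random_state=0)

for name, clf in [("logreg", LogisticRegression(max_iter=1000)),
                  ("svm", SVC()),
                  ("random-forest", RandomForestClassifier(random_state=0))]:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:>14}: {acc:.3f}")
```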
Citations: 0
Chaotic Map based Random Binary Key Sequence Generation
Q1 Mathematics · Pub Date: 2024-08-08 · DOI: 10.5815/ijcnis.2024.04.07
Vishwas C. G. M., R. Kunte
Image encryption is an efficient mechanism for securing digital images during transmission over communication channels, and key-sequence generation plays a vital role in it. The proposed system consists of stages such as generating four chaotic maps, converting the generated maps to binary vectors, rotating a Linear Feedback Shift Register (LFSR), and selecting the generated binary chaotic key sequences from the resulting key pool. The novelty of this implementation lies in generating binary sequences by selecting among all four chaotic maps, viz. the Tent, Logistic, Henon, and Arnold Cat map (ACM). The LFSR selects which chaotic map produces each random key sequence. Five primitive polynomials of degrees 5, 6, 7, and 8 are considered for key-sequence generation. Each primitive polynomial generates 61 binary key sequences stored in a binary key pool. All 61 generated binary key sequences are subjected to the NIST and FIPS randomness tests, and a performance analysis of the generated sequences is carried out. From the obtained results, it can be concluded that the binary key sequences are random and unpredictable and offer a large key space, based on both individual sequences and their combinations. The generated binary key sequences can therefore be used efficiently for encrypting digital images.
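The generation pipeline is easy to illustrate. The Python sketch below implements two of the four maps (tent and logistic), thresholds their orbits into bits, and uses a small Fibonacci LFSR built from one degree-5 primitive polynomial to choose which map supplies each output bit. The parameters, seeds, and one-bit map-selection rule are illustrative choices, not the paper's exact construction.

```python
def logistic(x, r=3.99):
    return r * x * (1.0 - x)

def tent(x, mu=1.99):
    return mu * x if x < 0.5 else mu * (1.0 - x)

def lfsr(state, taps=(5, 3)):
    """5-bit Fibonacci LFSR for x^5 + x^3 + 1, a primitive polynomial of degree 5."""
    bit = 0
    for t in taps:
        bit ^= (state >> (t - 1)) & 1
    return ((state << 1) | bit) & 0b11111, bit

state = 0b10101            # nonzero LFSR seed
xs = [0.37, 0.41]          # initial conditions for the two chaotic maps
key_bits = []
for _ in range(64):
    state, sel = lfsr(state)                      # LFSR output selects the chaotic map
    xs[sel] = (logistic if sel else tent)(xs[sel])
    key_bits.append(1 if xs[sel] >= 0.5 else 0)   # threshold the orbit into a key bit

print("".join(map(str, key_bits)))
```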
Citations: 0
Quality of Experience Improvement and Service Time Optimization through Dynamic Computation Offloading Algorithms in Multi-access Edge Computing Networks
Q1 Mathematics · Pub Date: 2024-08-08 · DOI: 10.5815/ijcnis.2024.04.01
Marouane Myyara, Oussama Lagnfdi, A. Darif, Abderrazak Farchane
Multi-access Edge Computing optimizes computation in proximity to smart mobile devices, addressing the limitations of devices with insufficient capabilities. In scenarios featuring multiple compute-intensive and delay-sensitive applications, computation offloading becomes essential. The objective of this research is to enhance user experience, minimize service time, and balance workloads while optimizing computation offloading and resource utilization. In this study, we introduce dynamic computation offloading algorithms that simultaneously minimize service time and maximize quality of experience. These algorithms take task and resource characteristics into account to determine the optimal execution location based on evaluated metrics. To assess their impact, we employed the EdgeCloudSim simulator, which offers a realistic assessment of a Multi-access Edge Computing system. Simulation results show the superiority of our dynamic computation offloading algorithm over the alternatives, achieving enhanced quality of experience and minimal service time. The findings underscore the effectiveness of the proposed algorithm and its potential to enhance mobile application performance. The comprehensive evaluation provides insight into the robustness and practical applicability of the proposed approach, positioning it as a valuable solution for MEC networks. This research contributes to ongoing efforts to advance computation offloading strategies for improved performance in edge computing environments.
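A first-order version of the offloading decision can be written down directly. The sketch below compares estimated service times for local, edge, and cloud execution from task size and node speed, which is the standard textbook formulation; the paper's actual algorithms weigh additional QoE metrics, and the task and node figures here are invented.

```python
def service_time_ms(cycles, data_bits, cpu_hz, uplink_bps=None):
    """Compute time plus optional transmission time, in milliseconds."""
    t = cycles / cpu_hz
    if uplink_bps is not None:       # local execution needs no uplink transfer
        t += data_bits / uplink_bps
    return 1000.0 * t

task = {"cycles": 8e8, "data_bits": 4e6}   # a hypothetical compute-intensive task
options = {
    "local": service_time_ms(task["cycles"], task["data_bits"], cpu_hz=1e9),
    "edge":  service_time_ms(task["cycles"], task["data_bits"], cpu_hz=8e9, uplink_bps=50e6),
    "cloud": service_time_ms(task["cycles"], task["data_bits"], cpu_hz=32e9, uplink_bps=10e6),
}
print(min(options, key=options.get), options)   # here the edge wins: fast CPU, cheap uplink
```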
Citations: 0
Cascaded Machine Learning Approach with Data Augmentation for Intrusion Detection System
Q1 Mathematics · Pub Date: 2024-08-08 · DOI: 10.5815/ijcnis.2024.04.02
Argha Chandra Dhar, Arna Roy, M. Akhand, Md Abdus Samad Kamal, Kou Yamada
Cybersecurity has received significant attention globally with the ever-continuing expansion of internet usage, owing to the growth and adverse impacts of cybercrimes, which include disrupting businesses, corrupting or altering sensitive data, stealing or exposing information, and illegally accessing computer networks. As popular countermeasures, various kinds of firewalls, antivirus systems, and Intrusion Detection Systems (IDS) have been introduced to protect networks from such attacks. Recently, Machine Learning (ML), including Deep Learning (DL) based autonomous systems, has become state-of-the-art in cyber security, with rapid growth and superior performance. This study aims to develop a novel IDS that pays more attention to classifying attack cases correctly and categorizes attacks at the subclass level, by proposing a two-step process within a cascaded framework. The proposed framework recognizes attacks using one ML model and classifies them into subclass levels using another ML model in successive operations. The most challenging part is training both models on datasets with unbalanced attack and non-attack cases, which is overcome with a proposed data augmentation technique: the limited attack samples in the dataset are augmented in the training set so that the attack cases are learned properly. Finally, the proposed framework is implemented with a neural network (NN), the most popular ML model, and evaluated on the NSL-KDD dataset through a rigorous analysis of each subclass, emphasizing the major attack classes. The proficiency of the proposed cascaded approach with data augmentation is compared with three other models: the cascaded model without data augmentation, and the standard single-NN model with and without the data augmentation technique. Experimental results on the NSL-KDD dataset show the proposed method to be an effective IDS that outperforms existing state-of-the-art ML models.
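A skeletal version of the two-stage pipeline is shown below. It is an outline with synthetic data, not the paper's NSL-KDD experiment: random duplication of minority attack samples stands in for the proposed augmentation technique, and small MLPs stand in for the paper's NN models.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))
y_bin = (rng.random(600) < 0.1).astype(int)   # imbalanced: roughly 10% attacks
y_sub = rng.integers(0, 3, size=600)          # attack subclass labels (e.g., DoS / Probe / R2L)

# Augmentation stand-in: oversample minority (attack) rows to balance stage-1 training data.
att = np.flatnonzero(y_bin == 1)
extra = rng.choice(att, size=(y_bin == 0).sum() - att.size, replace=True)
Xa, ya = np.vstack([X, X[extra]]), np.concatenate([y_bin, y_bin[extra]])

stage1 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(Xa, ya)            # attack vs. normal
stage2 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X[att], y_sub[att])  # subclass level

def classify(x):
    x = x.reshape(1, -1)
    if stage1.predict(x)[0] == 0:
        return "normal"
    return f"attack/subclass-{stage2.predict(x)[0]}"  # cascaded second stage runs only on attacks

print(classify(X[0]))
```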
Citations: 0
Parameter Estimation of Cellular Communication Systems Models in Computational MATLAB Environment: A Systematic Solver-based Numerical Optimization Approaches
Q1 Mathematics · Pub Date: 2024-06-08 · DOI: 10.5815/ijcnis.2024.03.06
J. Isabona, Sayo A. Akinwumi, Theophilus E. Arijaje, Odesanya Ituabhor, A. Imoize
Model-based parameter estimation, identification, and optimisation play a dominant role in many aspects of physical and operational processes in applied sciences, engineering, and other related disciplines. The intricate task involves fitting the most appropriate parametric model, with nonlinear or linear features, to experimental field datasets prior to selecting the best optimisation algorithm with the best configuration; the task is thus usually framed as solving a clear optimisation problem. In this paper, a systematic stepwise approach is employed to review and benchmark six numerical optimization algorithms in the MATLAB computational environment: Gradient Descent (GRA), Levenberg-Marquardt (LEM), Quasi-Newton (QAN), Gauss-Newton (GUN), Nelder-Mead (NEM), and Trust-Region-Dogleg (TRD). This was accomplished by applying them to an intricate radio-frequency propagation modelling and parameter-estimation problem using practical spatial signal data. The spatial signal data were obtained via real-time field drive tests conducted around six eNodeB transmitters, with case studies taken from different terrains where 4G LTE transmitters are operational. Three criteria connected with the rate of convergence were applied, and the results show that the approximate Hessian-based QAN algorithm, followed by the LEM algorithm, yielded the best results in optimizing and estimating the RF propagation model parameters. The approach and output of this paper will be a valuable asset in assisting end-users to select the most suitable optimization algorithm for their respective intricate problems.
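An analogous workflow can be sketched in Python with SciPy (the paper itself works in MATLAB), which exposes several of the benchmarked solver families. The snippet fits a log-distance path-loss model, PL(d) = PL0 + 10 n log10(d/d0), to synthetic drive-test data with Levenberg-Marquardt and two trust-region methods; the model choice, reference distance, and data are illustrative assumptions, not the paper's dataset.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
d = np.linspace(50, 2000, 120)                   # Tx-Rx distances in metres
pl_true = 32.0 + 10 * 3.1 * np.log10(d / 50.0)   # "measured" path loss: PL0 = 32 dB, n = 3.1, d0 = 50 m
pl_meas = pl_true + rng.normal(0, 2.5, d.size)   # shadow-fading noise

def residuals(theta):
    pl0, n = theta
    return pl0 + 10 * n * np.log10(d / 50.0) - pl_meas

for method in ("lm", "trf", "dogbox"):           # Levenberg-Marquardt and trust-region solvers
    sol = least_squares(residuals, x0=[40.0, 2.0], method=method)
    print(f"{method:>6}: PL0={sol.x[0]:.2f} dB, n={sol.x[1]:.3f}, cost={sol.cost:.1f}")
```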
Citations: 0