
Latest publications in International journal of machine learning and computing

Self-Organizing Computational System for Network Anomaly Exploration using Learning Algorithms
Pub Date : 2023-10-05 DOI: 10.53759/7669/jmc202303035
Preethi P, Lalitha K, Yogapriya J
By the end of 2021, the national forum for reporting information security flaws had received 14,871 reports, a 46.6% increase over 2020. These included 5,567 high-risk vulnerabilities, nearly 1,400 more than the previous year. Evidently, both the total number of vulnerabilities found annually and the number of high-risk vulnerabilities are rising. For data mining technology to play a wider part in the predictive investigation of network security models, its capability must be improved. This paper combines the concepts of data mining (DM) with machine learning (ML), introduces related DM technologies and a secure collection channel, and finally presents a DM-based computer network security maintenance process that improves the application of DM to the predictive analysis of network security models. A self-organizing neural network technique is introduced that detects denial-of-service (DoS) attacks in complicated networks quickly, effectively, and precisely. The paper also analyses a number of frequently employed data mining methods, including association, clustering, classification, neural networks, regression, and web data mining, and then introduces a computer data mining method based on the self-organizing (SO) algorithm. The SO-algorithm-based technique is also compared with conventional techniques in defensive detection tests against DoS attacks. Experimental results show an average detection accuracy above 98.56% and an average efficiency gain of more than 20%, demonstrating that tests based on the data-mining-connected SO algorithm achieve better defensive detection than standard algorithms.
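The abstract does not give the self-organizing model's internals. As a rough illustration of the general idea, the sketch below trains a minimal self-organizing map (SOM) on "normal" traffic feature vectors and flags samples with a large quantization error as DoS-like; the feature values, grid size, and threshold rule are all synthetic assumptions, not the paper's method.

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small self-organizing map on traffic feature vectors."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-3    # shrinking neighbourhood
        for x in data:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            influence = np.exp(-d2 / (2 * sigma ** 2))
            weights += lr * influence[:, None] * (x - weights)
    return weights

def quantization_error(weights, x):
    """Distance from a sample to its best-matching unit."""
    return float(np.sqrt(((weights - x) ** 2).sum(axis=1).min()))

# Synthetic "normal" traffic features (e.g. packet rate, entropy), scaled to [0, 1].
rng = np.random.default_rng(1)
normal = rng.normal(0.5, 0.05, size=(200, 4)).clip(0, 1)
som = train_som(normal)

# Flag anything far outside the quantization error seen on normal traffic.
threshold = max(quantization_error(som, x) for x in normal) * 1.5
dos_like = np.array([0.99, 0.01, 0.99, 0.99])   # far from the normal cluster
print(quantization_error(som, dos_like) > threshold)
```

The detection rule (quantization error against a scaled maximum) is one common SOM anomaly heuristic; the paper may use a different decision criterion.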
Citations: 0
Computational Engineering based approach on Artificial Intelligence and Machine learning-Driven Robust Data Centre for Safe Management
Pub Date : 2023-10-05 DOI: 10.53759/7669/jmc202303038
Senthilkumar G, Rajendran P, Suresh Y, Herald Anantha Rufus N, Rama chaithanya Tanguturi, Rajdeep Singh Solanki
This research explores the integration of Artificial Intelligence (AI), specifically the Recurrent Neural Network (RNN) model, into the optimization of data center cooling systems through Computational Engineering. Utilizing Computational Fluid Dynamics (CFD) simulations as a foundational data source, the study aimed to enhance operational efficiency and sustainability in data centers through predictive modeling. The findings revealed that the RNN model, trained on CFD datasets, proficiently forecasted key data center conditions, including temperature variations and airflow dynamics. This AI-driven approach demonstrated marked advantages over traditional methods, significantly minimizing energy wastage commonly incurred through overcooling. Additionally, the proactive nature of the model allowed for the timely identification and mitigation of potential equipment challenges or heat hotspots, ensuring uninterrupted operations and equipment longevity. While the research showcased the transformative potential of merging AI with data center operations, it also indicated areas for further refinement, including the model's adaptability to diverse real-world scenarios and its management of long-term dependencies. In conclusion, the study illuminates a promising avenue for enhancing data center operations, highlighting the significant benefits of an AI-driven approach in achieving efficiency, cost reduction, and environmental sustainability.
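The paper's RNN and its CFD training data are not available here. As a compact recurrent stand-in, the sketch below forecasts a synthetic, normalised "rack temperature" series one step ahead with an echo-state network (random fixed reservoir weights, linear readout solved by least squares); the signal, reservoir size, and spectral-radius scaling are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Normalised synthetic "rack temperature" signal standing in for CFD output.
t = np.linspace(0, 8 * np.pi, 500)
series = 0.5 * np.sin(t) + rng.normal(0, 0.02, t.size)

# Echo-state network: fixed random recurrent reservoir, trained linear readout.
H = 50
Win = rng.uniform(-0.5, 0.5, H)
W = rng.uniform(-0.5, 0.5, (H, H))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1 keeps it stable

def reservoir_states(inputs):
    """Run the recurrent reservoir over the input sequence."""
    h = np.zeros(H)
    out = []
    for v in inputs:
        h = np.tanh(Win * v + W @ h)
        out.append(h.copy())
    return np.array(out)

states = reservoir_states(series[:-1])
targets = series[1:]                # one-step-ahead forecasting
warm = 50                           # discard the initial transient
A = np.hstack([states[warm:], np.ones((len(states) - warm, 1))])
readout, *_ = np.linalg.lstsq(A, targets[warm:], rcond=None)

pred = A @ readout
rmse = float(np.sqrt(np.mean((pred - targets[warm:]) ** 2)))
print(round(rmse, 4))
```

The fit is evaluated in-sample purely for illustration; a real deployment would hold out a validation window and likely use a trained RNN (e.g. LSTM) as the paper describes.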
Citations: 0
Comparative Analysis of Transaction Speed and Throughput in Blockchain and Hashgraph: A Performance Study for Distributed Ledger Technologies
Pub Date : 2023-10-05 DOI: 10.53759/7669/jmc202303041
Dinesh Kumar K, Duraimutharasan N, Shanthi HJ, Vennila G, Prabu Shankar B, Senthil P
Blockchain technology garners significant attention and recognition due to several key advantages. Trust, reliability, speed, and transparency are among the prominent benefits contributing to its growing prominence. The decentralized nature of blockchain allows a high level of trust, as transactions are recorded and verified by multiple participants across the network; this, in turn, enhances reliability because there is no single point of failure. Speed is also a notable advantage, particularly compared with traditional systems that involve intermediaries and complex verification processes: blockchain enables faster, more efficient transaction processing, reducing delays and costs. This paper provides a comprehensive comparative analysis of two prominent distributed ledger technologies, blockchain and hashgraph, both of which offer decentralized and secure systems for recording and validating transactions or information. It explores the underlying mechanisms, consensus algorithms, advantages, and limitations of the two technologies, examines their potential applications, and discusses the implications of their respective design choices. By examining the nuances of blockchain and hashgraph, this study seeks to contribute to the ongoing discourse on distributed ledger technologies and to aid decision-making about their appropriate adoption in various domains and applications.
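The abstract reports no concrete figures, but the kind of throughput and confirmation-latency arithmetic such a performance study rests on can be sketched in a few lines. All parameters below are illustrative (roughly Bitcoin-like block parameters versus a fast gossip-based ledger) and are not measurements from the paper.

```python
def blockchain_throughput(txs_per_block: int, block_interval_s: float) -> float:
    """Upper bound on transactions per second for a batch-committing chain."""
    return txs_per_block / block_interval_s

def expected_confirmation_delay(block_interval_s: float, confirmations: int) -> float:
    """A transaction waits about half an interval for inclusion, then
    `confirmations` further blocks before it is treated as final."""
    return block_interval_s / 2 + confirmations * block_interval_s

# Illustrative parameters only; none of these figures come from the paper.
print(round(blockchain_throughput(2000, 600), 2))   # slow batch chain, TPS
print(blockchain_throughput(10000, 1.0))            # fast continuous ledger, TPS
print(expected_confirmation_delay(600, 6))          # seconds to "finality"
```

The point of the comparison is that throughput is bounded by batch size over commit interval, so a ledger that commits events continuously (as hashgraph's gossip protocol does) escapes the block-interval bottleneck entirely.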
Citations: 0
Development and Implementation of an Intelligent Health Monitoring System using IoT and Advanced Machine Learning Techniques
Pub Date : 2023-10-05 DOI: 10.53759/7669/jmc202303037
Pabitha C, Kalpana V, Evangelin Sonia SV, Pushpalatha A, Mahendran G, Sivarajan S
Healthcare practices stand to change substantially as IoT technologies converge with cutting-edge machine learning. This study offers an IoT-connected, sensor-based Intelligent Health Monitoring System for real-time patient health assessment. The system provides continuous health monitoring and early anomaly identification by integrating temperature, blood pressure, and ECG sensors. After thorough analysis, the Support Vector Machine (SVM) model proves a reliable predictor, achieving 94% specificity, a 95% F1 score, 92% recall, and 94% overall accuracy. These outcomes demonstrate how well the system performs in providing precise and timely health predictions. Healthcare facilities can easily integrate the Intelligent Health Monitoring System as a practical application of this research: doctors can use real-time sensor data to proactively spot health issues and provide prompt interventions, improving the quality of patient care. The study's integration of advanced machine learning and IoT underlines the strategy's disruptive potential for transforming healthcare procedures. It lays the foundation for a more effective, responsive, and patient-centered healthcare ecosystem by employing the potential of connected devices and predictive analytics.
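The abstract quotes specificity, recall, F1, and overall accuracy; the helper below shows how those metrics follow from confusion-matrix counts. The example counts are hypothetical, chosen only to land near the reported figures, and are not the paper's data.

```python
def classification_metrics(tp, fp, tn, fn):
    """Specificity, recall, precision, F1 and accuracy from confusion counts."""
    specificity = tn / (tn + fp)                  # true-negative rate
    recall = tp / (tp + fn)                       # sensitivity
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return specificity, recall, precision, f1, accuracy

# Hypothetical counts for a binary "anomaly vs normal" screen.
spec, rec, prec, f1, acc = classification_metrics(tp=92, fp=6, tn=94, fn=8)
print(f"specificity={spec:.2f} recall={rec:.2f} f1={f1:.2f} accuracy={acc:.2f}")
```

Note that F1 balances precision against recall, so a model can post a high F1 while its specificity lags; reporting all four, as the paper does, guards against that.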
Citations: 0
A Discussion of Key Aspects and Trends in Self Driving Vehicle Technology
Pub Date : 2023-10-05 DOI: 10.53759/7669/jmc202303047
Dong Jo Kim
Autonomous vehicles use remote-sensing technologies such as radar, GPS, cameras, and lidar to observe their immediate environment and construct a comprehensive three-dimensional representation of it. The conventional constituents of this environment include structures, other vehicles, and people, as well as signage and traffic indicators. Today, a self-driving car is equipped with a wide array of sensors not found in a traditional automobile. Commonly used sensors include lasers and visual sensors, which provide a comprehensive understanding of the immediate surroundings. These sensors are costly and have restrictive operating requirements, and installing them in a moving vehicle significantly diminishes their operational longevity. Furthermore, trustworthiness is a matter of significant concern. The present article is structured into distinct parts, each of which delves into a significant aspect of, and obstacle to, the trend and development of autonomous vehicles. The parts describing the obstacles cover the conflict arising from the use of cameras versus LiDAR technology, the influence of social norms, the impact of human psychology, and the legal complexities involved.
Citations: 0
An Efficient Voice Authentication System using Enhanced Inceptionv3 Algorithm
Pub Date : 2023-10-05 DOI: 10.53759/7669/jmc202303032
Kaladharan N, Arunkumar R
Automatic voice authentication based on deep learning is a promising technology that has received much attention from academia and industry. It has proven effective in a variety of applications, including biometric access control systems. Using biometric data in such systems is difficult, particularly in a centralized setting, and introduces numerous risks such as information disclosure, unreliability, and threats to security and privacy. Voice authentication systems are becoming increasingly important in solving these issues, especially when a device relies on voice commands from the user. This work investigates the development of a text-independent voice authentication system. The spatial features of the voiceprint (corresponding to the speech spectrum) are present in the speech signal's spectrogram, and weighted wavelet packet cepstral coefficients (W-WPCC) are effective for extracting them. W-WPCC features are computed by combining sub-band energies with sub-band spectral centroids under a weighting scheme, yielding noise-resistant acoustic features. In addition, this work proposes an enhanced Inception v3 model for voice authentication. The proposed model extracts features from the input data through convolutional and pooling layers; by employing fewer parameters, this architecture reduces the complexity of the convolution process while increasing learning speed. After training, the enhanced Inception v3 model classifies audio samples as authenticated or not based on the extracted features. Experiments were carried out on the speech of five English speakers whose voices were collected from YouTube. The results reveal that the improved method, based on enhanced Inception v3 and trained on speech spectrogram images, outperforms existing methods, achieving an average classification accuracy of 99%. Compared with the other network models on the given dataset, the proposed enhanced Inception v3 network achieves the best results in model training time, recognition accuracy, and stability.
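The W-WPCC idea, blending each sub-band's energy with its spectral centroid, can be illustrated with a much-simplified sketch. The blend weight `alpha`, the uniform band split, and the log compression are assumptions for illustration; real W-WPCC uses a wavelet-packet decomposition followed by a cepstral transform, both omitted here.

```python
import numpy as np

def subband_features(power_spectrum, n_bands=8, alpha=0.6):
    """Blend each sub-band's energy with its spectral centroid.

    Simplified stand-in for W-WPCC: uniform band split instead of a
    wavelet-packet tree, and no cepstral transform.
    """
    feats = []
    for band in np.array_split(np.asarray(power_spectrum, dtype=float), n_bands):
        bins = np.arange(len(band))
        energy = band.sum()
        centroid = (bins * band).sum() / energy if energy > 0 else 0.0
        feats.append(alpha * energy + (1 - alpha) * centroid)
    return np.log1p(np.array(feats))    # compress the dynamic range

# A synthetic power spectrum standing in for one frame of speech.
rng = np.random.default_rng(0)
spectrum = np.abs(rng.normal(size=256)) ** 2
features = subband_features(spectrum)
print(features.shape)
```

Spectral centroids shift less than raw energies under additive noise, which is the intuition behind blending the two into a noise-resistant feature.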
Citations: 0
Hybrid Machine Learning Technique to Detect Active Botnet Attacks for Network Security and Privacy
Pub Date : 2023-10-05 DOI: 10.53759/7669/jmc202303044
Venkatesan C, Thamaraimanalan T, Balamurugan D, Gowrishankar J, Manjunath R, Sivaramakrishnan A
A botnet is a malware application controlled remotely by a programmer with the assistance of a botmaster. Botnets can launch large cyber-attacks such as denial-of-service (DoS), phishing, spam, data stealing, and identity theft, and can compromise the security and privacy of the affected systems. The conventional approach to detecting botnets is signature-based analysis, which cannot discover botnets that are not yet visible. Behaviour-based analysis appears to be an appropriate answer to botnet characteristics that are constantly evolving. This paper develops an efficient botnet detection algorithm using machine learning with traffic reduction to increase accuracy. Based on behavioural analysis, a traffic reduction strategy cuts network traffic to improve overall system performance; several network devices are typically used to retrieve the traffic information. With a detection accuracy of 98.4%, known and unknown botnet activities are measured on the supplied datasets. The machine learning-based traffic reduction system achieves a high reduction rate of about 82%, with false-positive rates between 0% and 2%. Both findings demonstrate that the suggested technique is efficient and accurate.
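The two-stage shape of the pipeline, a traffic-reduction pre-filter followed by a behaviour-based ensemble vote, can be sketched as below. The flow features, rule thresholds, and the minimum-packet cutoff are all hypothetical illustrations, not the paper's learned models.

```python
# Toy pipeline: traffic reduction, then a majority-vote ensemble of
# simple behavioural rules. Thresholds are illustrative only.
def reduce_traffic(flows, min_packets=10):
    """Drop short flows that carry little behavioural signal."""
    return [f for f in flows if f["packets"] >= min_packets]

RULES = [
    lambda f: f["bytes_per_packet"] < 100,     # many tiny packets
    lambda f: f["dst_port_entropy"] > 0.8,     # scanning-like fan-out
    lambda f: f["inter_arrival_var"] < 0.01,   # machine-regular timing
]

def is_botnet(flow):
    votes = sum(rule(flow) for rule in RULES)
    return votes >= 2                          # majority vote

flows = [
    {"packets": 3,  "bytes_per_packet": 50,  "dst_port_entropy": 0.9, "inter_arrival_var": 0.001},
    {"packets": 40, "bytes_per_packet": 60,  "dst_port_entropy": 0.9, "inter_arrival_var": 0.002},
    {"packets": 25, "bytes_per_packet": 900, "dst_port_entropy": 0.2, "inter_arrival_var": 0.5},
]
kept = reduce_traffic(flows)
print(len(kept), [is_botnet(f) for f in kept])
```

In the paper the voters are trained machine learning models rather than hand-written rules, but the reduction-then-classify structure is the same: discarding uninformative flows first is what yields the reported ~82% traffic reduction without hurting accuracy.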
Citations: 0
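The abstract above pairs a traffic reduction step (aggregating raw packets into per-flow records) with a behavioural classifier. A minimal stand-alone sketch of that idea follows; the packet tuples, field names, and the threshold rule are all hypothetical illustrations, not the paper's actual features or model.

```python
from collections import defaultdict

# Hypothetical packet records: (src, dst, dst_port, bytes)
packets = [
    ("10.0.0.5", "203.0.113.9", 6667, 120),
    ("10.0.0.5", "203.0.113.9", 6667, 118),
    ("10.0.0.7", "198.51.100.2", 443, 1500),
    ("10.0.0.5", "203.0.113.9", 6667, 121),
    ("10.0.0.7", "198.51.100.2", 443, 9000),
]

def reduce_traffic(packets):
    """Aggregate per-flow statistics so the classifier sees one record
    per (src, dst, port) flow instead of every individual packet."""
    flows = defaultdict(lambda: {"count": 0, "bytes": 0})
    for src, dst, port, size in packets:
        f = flows[(src, dst, port)]
        f["count"] += 1
        f["bytes"] += size
    return flows

def looks_like_bot(stats):
    """Toy behavioural rule (not the paper's classifier): many small,
    uniform-sized packets on one flow, typical of C&C keep-alives."""
    return stats["count"] >= 3 and stats["bytes"] / stats["count"] < 200

flows = reduce_traffic(packets)
suspects = [key for key, stats in flows.items() if looks_like_bot(stats)]
# 5 packets reduce to 2 flow records; the IRC-port flow is flagged
```

Here the reduction step shrinks five packets to two flow records before any classification runs, which is the mechanism by which a real ML pipeline would cut its processing load.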
Implementation of the Internet of Things for early Floods in Agricultural Land using Dimensionality Reduction Technique and Ensemble ML 基于降维技术和集成ML的农用地早期洪水物联网实现
Pub Date : 2023-10-05 DOI: 10.53759/7669/jmc202303050
Murali Dhar M S, Kishore Kumar A, Rajkumar B, Poonguzhali P K, Hemakesavulu O, Mahaveerakannan R
Due to human activities like global warming, pollution, ozone depletion, deforestation, etc., the frequency and severity of natural disasters have increased in recent years. Unlike many other types of natural disasters, floods can be anticipated and warned about in advance. This work presents a flood monitoring and alarm system built around a smart device. An Arduino microcontroller handles detection and indication, making the device easy to monitor and manage. The device uses its own sensors to take readings of its immediate surroundings, then uploads that data to the cloud and notifies a central administrator of the impending flood. When the administrator identifies a crisis situation from the collected data, alerts are quickly sent to people in the vicinity of any places likely to be flooded; an Android app displays the warning on the user's screen. The project's end goal is to develop an application that swiftly disseminates flood warning information to rural agricultural communities. Scaled principal component analysis (SPCA) is used to filter out extraneous data, and an ensemble machine learning technique is used to make flood predictions. The tests are performed on a dataset collected in real time and analysed in terms of a number of different parameters. In this research, we propose a strategy for long-term agricultural output through the mitigation of flood risk.
Citations: 0
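The flood system above combines several indicators through an ensemble model before raising an alert. A minimal sketch of ensemble voting over sensor features follows; the feature names, thresholds, and majority rule are invented for illustration and stand in for the paper's trained ensemble, not reproduce it.

```python
# Toy voting ensemble over hypothetical water-level features.
def rule_level(sample):
    """Absolute water level above a danger mark (assumed threshold)."""
    return sample["level_cm"] > 80

def rule_rise(sample):
    """Rate of rise per hour (assumed threshold)."""
    return sample["rise_cm_per_h"] > 10

def rule_rain(sample):
    """Recent upstream rainfall (assumed threshold)."""
    return sample["rain_mm"] > 50

def ensemble_flood_alert(sample, rules=(rule_level, rule_rise, rule_rain)):
    """Majority vote: an alert fires when at least two rules agree,
    so no single noisy sensor can trigger it alone."""
    votes = sum(rule(sample) for rule in rules)
    return votes >= 2

reading = {"level_cm": 95, "rise_cm_per_h": 12, "rain_mm": 20}
ensemble_flood_alert(reading)  # True: two of the three rules vote "flood"
```

The design point mirrors the abstract: combining weak individual predictors makes the final alert more robust than any one sensor reading.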
Highway Self-Attention Dilated Casual Convolutional Neural Network Based Short Term Load Forecasting in Micro Grid 基于高速公路自关注扩展随机卷积神经网络的微网短期负荷预测
Pub Date : 2023-10-05 DOI: 10.53759/7669/jmc202303033
Shreenidhi H S, Narayana Swamy Ramaiah
Forecasting the electricity load is crucial for power system planning and energy management. Since the season of the year, the weather, weekdays, and holidays are the key factors affecting load consumption, future demand is difficult to anticipate. Therefore, we propose a weather-based short-term load forecasting framework in this paper. First, missing data is filled in and data normalisation is performed in the pre-processing step. Normalisation accelerates convergence and improves network training efficiency by preventing gradient explosion during the training phase. The weather, PV, and load features are then extracted and fed into the proposed Highway Self-Attention Dilated Casual Convolutional Neural Network (HSAD-CNN) forecasting model. The dilated causal convolutions increase the receptive field without significantly raising computing costs. The multi-head self-attention mechanism (MHSA) weights the most significant time steps for a more accurate forecast. The highway skip network (HS-Net) uses shortcut paths and skip connections to improve information flow. This speeds up network convergence and prevents feature reuse, vanishing gradients, and negative learning problems. The performance of the HSAD-CNN forecasting technique is evaluated and compared to existing techniques under different day types and seasonal changes. The outcomes indicate that the HSAD-CNN forecasting model has a low Mean Absolute Error (MAE), Mean Squared Error (MSE), and Mean Absolute Percentage Error (MAPE), and a high R2.
Citations: 0
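The claim that dilated causal convolutions enlarge the receptive field cheaply can be seen in a few lines. The sketch below is a plain 1-D dilated causal convolution, not the HSAD-CNN architecture itself; the kernel weights and dilation value are arbitrary illustration choices.

```python
def dilated_causal_conv(x, kernel, dilation):
    """1-D dilated causal convolution: the output at time t depends
    only on x[t], x[t-d], x[t-2d], ... so no future values leak in."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(kernel):
            j = t - i * dilation  # step back by the dilation factor
            if j >= 0:
                acc += w * x[j]
        out.append(acc)
    return out

series = [1, 2, 3, 4, 5, 6]
result = dilated_causal_conv(series, kernel=[0.5, 0.5], dilation=2)
# → [0.5, 1.0, 2.0, 3.0, 4.0, 5.0]: each output averages x[t] and x[t-2]
```

With dilation d and kernel size k, one layer spans d·(k-1)+1 time steps, so stacking layers with growing dilations covers a long history while the per-layer cost stays that of an ordinary convolution.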
A Prediction Model Based Energy Efficient Data Collection for Wireless Sensor Networks 基于预测模型的无线传感器网络节能数据采集
Pub Date : 2023-10-05 DOI: 10.53759/7669/jmc202303031
Balakumar D, Rangaraj J
Many real-time applications make use of advanced wireless sensor networks (WSNs). Because wireless sensor nodes (SNs) have limited memory, constrained power, narrow communication bandwidth, and modest processing units, WSNs face severe resource constraints. Data prediction algorithms in WSNs have become crucial for reducing redundant data transmission and extending the network's longevity. Redundancy can be decreased using proper machine learning (ML) techniques while the data aggregation process operates. Researchers continue to search for effective modelling strategies and algorithms that help generate efficient and acceptable data aggregation methodologies from preexisting WSN models. This work proposes an energy-efficient Adaptive Seagull Optimization Algorithm (ASOA) protocol for selecting the best cluster head (CH). An extreme learning machine (ELM) is employed to select the data corresponding to each node, generating a tree that clusters the sensor data. The Dual Graph Convolutional Network (DGCN) is an analytical method that predicts future trends using time series data. Data clustering and aggregation are employed at each cluster head to efficiently perform sample data prediction across WSNs, primarily to minimize the processing overhead caused by the prediction algorithm. Simulation findings suggest that the presented method is practical and efficient with regard to reliability, data reduction, and power usage. The results demonstrate that the suggested data collection approach significantly surpasses the existing Least Mean Square (LMS), Periodic Data Prediction Algorithm (P-PDA), and Combined Data Prediction Model (CDPM) methods. The proposed DGCN method has a transmission suppression rate of 92.68%, a difference of 22.33%, 16.69%, and 12.54% compared to the current methods (i.e., LMS, P-PDA, and CDPM).
Citations: 0
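The "transmission suppression rate" reported above comes from prediction-based data collection: a node transmits only when a reading deviates from what the sink can already predict. A minimal sketch follows, using a constant last-value predictor rather than the paper's DGCN; the readings and the deviation threshold are hypothetical.

```python
def suppressed_transmissions(readings, threshold=0.5):
    """Dual-prediction sketch: node and sink both assume the last
    transmitted value; the node transmits only when the real reading
    deviates from that shared prediction by more than `threshold`."""
    sent = [(0, readings[0])]   # the first reading is always sent
    last = readings[0]
    for t, x in enumerate(readings[1:], start=1):
        if abs(x - last) > threshold:
            sent.append((t, x))  # correct the sink's model
            last = x
    return sent

temps = [20.0, 20.1, 20.2, 21.5, 21.6, 23.0]
suppressed_transmissions(temps)  # → [(0, 20.0), (3, 21.5), (5, 23.0)]
```

Here three of six readings are transmitted, a 50% suppression rate; a stronger predictor (such as the time-series model in the paper) would track the signal more closely and suppress far more transmissions, which is where figures like 92.68% come from.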