
Latest publications from Network-Computation in Neural Systems

A novel approach for heart disease prediction using hybridized AITH2O algorithm and SANFIS classifier.
IF 1.6 | CAS Tier 3, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-02-01 | Epub Date: 2024-09-25 | DOI: 10.1080/0954898X.2024.2404915 | Pages: 109-147
Jayachitra Sekar, Prasanth Aruchamy

In today's world, heart disease threatens human life owing to high mortality and morbidity across the globe. Earlier prediction of heart disease enables timely treatment of patients and better diagnostic recommendations from medical professionals. However, existing machine learning classifiers suffer from computational complexity and overfitting, which reduce the classification accuracy of the diagnostic system. To address these constraints, this work proposes a new hybrid optimization algorithm to improve classification accuracy and optimize computation time in smart healthcare applications. First, optimal features are selected through the hybrid Arithmetic Optimization and Inter-Twinned Mutation-Based Harris Hawk Optimization (AITH2O) algorithm. The hybrid AITH2O algorithm combines the advantages of exploration and exploitation and converges faster. It is further employed to tune the parameters of the Stabilized Adaptive Neuro-Fuzzy Inference System (SANFIS) classifier for accurate heart disease prediction. The Cleveland heart disease dataset is used to validate the efficacy of the proposed algorithm. Simulations are carried out in the MATLAB 2020a environment. The results show that the proposed hybrid SANFIS classifier attains a superior accuracy of 99.28% and a true positive rate of 99.46% compared to existing state-of-the-art techniques.
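The abstract does not give the AITH2O update equations, but the exploration/exploitation balance it credits can be sketched generically. Below is a hypothetical, minimal search loop (all names, data, and update rules are illustrative, not taken from the paper) that tunes a single classifier threshold by alternating random exploration with local exploitation:

```python
# Hypothetical sketch: a generic exploration/exploitation search loop in the
# spirit of a hybrid metaheuristic. It tunes one decision threshold so a toy
# rule separates two 1-D clusters. Not the paper's actual AITH2O algorithm.
import random

def fitness(threshold, samples):
    """Fraction of samples classified correctly by the rule (x > threshold)."""
    correct = sum(1 for x, label in samples if (x > threshold) == label)
    return correct / len(samples)

def hybrid_search(samples, iters=200, seed=0):
    rng = random.Random(seed)
    best = rng.uniform(0.0, 1.0)
    best_fit = fitness(best, samples)
    for t in range(iters):
        if rng.random() < 0.5:
            # exploration: sample a fresh candidate anywhere in [0, 1]
            cand = rng.uniform(0.0, 1.0)
        else:
            # exploitation: perturb the incumbent; step size shrinks over time
            cand = min(1.0, max(0.0, best + rng.gauss(0.0, 0.1 * (1 - t / iters))))
        cand_fit = fitness(cand, samples)
        if cand_fit > best_fit:
            best, best_fit = cand, cand_fit
    return best, best_fit

# Two separable clusters: label False near 0.2-0.4, label True near 0.7-0.9.
data = [(0.2 + 0.05 * i, False) for i in range(5)] + \
       [(0.7 + 0.05 * i, True) for i in range(5)]
thr, fit = hybrid_search(data)
print(round(fit, 2))  # with separable clusters the search finds a perfect split
```

Any threshold between the two clusters scores a fitness of 1.0; the shrinking mutation scale mirrors the common metaheuristic pattern of broad search early and fine-tuning late.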

Citations: 0
Hybrid deep learning based stroke detection using CT images with routing in an IoT environment.
IF 1.1 | CAS Tier 3, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-02-01 | DOI: 10.1080/0954898X.2025.2452280 | Pages: 1-40
Anchana Balakrishnannair Sreekumari, Arul Teen Yesudasan Paulsy

Stroke remains a leading global health concern, and early diagnosis and accurate identification of stroke lesions are essential for improving treatment outcomes and reducing long-term disability. Computed Tomography (CT) imaging is widely used in clinical settings for diagnosing stroke, assessing lesion size, and determining severity. However, accurate segmentation and early detection of stroke lesions in CT images remain challenging. Thus, a Jaccard_Residual SqueezeNet is proposed for predicting stroke from CT images with the integration of the Internet of Things (IoT); it incorporates the Jaccard index into a Residual SqueezeNet. First, the brain CT image is routed to the Base Station (BS) using the Fractional Jellyfish Search Pelican Optimization Algorithm (FJSPOA), and preprocessing is performed with a median filter. Skull segmentation is then accomplished by ENet, followed by feature extraction. Finally, stroke is detected using the Jaccard_Residual SqueezeNet. The throughput, energy, distance, trust, and delay achieved for routing are 72.172 Mbps, 0.580 J, 22.243 m, 0.915, and 0.083 s, respectively, while the accuracy, sensitivity, precision, and F1-score for stroke detection are 0.902, 0.896, 0.916, and 0.906. These findings suggest that the Jaccard_Residual SqueezeNet offers a robust and efficient platform for stroke detection.
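The Jaccard index that the architecture is named after is simple to state. A minimal sketch for two flat binary segmentation masks (the mask contents below are illustrative, not from the paper):

```python
# Jaccard index (intersection over union) for equal-length binary masks.
def jaccard(mask_a, mask_b):
    """|A ∩ B| / |A ∪ B|; defined as 1.0 when both masks are empty."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0

pred  = [1, 1, 0, 0, 1, 0]   # predicted lesion pixels (illustrative)
truth = [1, 0, 0, 0, 1, 1]   # ground-truth lesion pixels (illustrative)
print(jaccard(pred, truth))  # 2 overlapping pixels / 4 in the union = 0.5
```

In the paper the index is used inside the network rather than as a post-hoc metric, but the quantity being optimized is this same overlap ratio.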

Citations: 0
Optimized deep maxout for crowd anomaly detection: A hybrid optimization-based model.
IF 1.6 | CAS Tier 3, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-02-01 | Epub Date: 2024-09-20 | DOI: 10.1080/0954898X.2024.2392772 | Pages: 148-173
Rashmi Chaudhary, Manoj Kumar

Monitoring surveillance video is time-consuming, and the complexity of typical crowd behaviour in crowded scenes makes it even more challenging. This has sparked interest in computer vision-based anomaly detection. This study introduces a new crowd anomaly detection method with two main steps: visual attention detection and anomaly detection. The visual attention detection phase uses an enhanced bilateral texture-based methodology to pinpoint crucial areas in crowded scenes, improving anomaly detection precision. The anomaly detection phase then employs an Optimized Deep Maxout Network to robustly identify unusual behaviours; its deep learning capabilities are essential for detecting complex patterns in diverse crowd scenarios. To enhance accuracy, the model is trained using the Battle Royale Coalesced Atom Search Optimization (BRCASO) algorithm, which fine-tunes the weights for superior performance, improving detection accuracy and reliability. Finally, the effectiveness of the proposed work is contrasted with other traditional approaches using various performance metrics. The proposed crowd anomaly detection is implemented in Python. The results show that the suggested model attains a detection accuracy of 97.28% at a learning rate of 90%, much higher than that of the other models: ASO = 90.56%, BMO = 91.39%, BES = 88.63%, BRO = 86.98%, and FFLY = 89.59%.
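A maxout unit, the building block of a deep maxout network, outputs the maximum over k affine pieces rather than applying a fixed nonlinearity. A minimal sketch with illustrative (not learned) weights:

```python
# One maxout unit: activation = max_i (w_i . x + b_i) over k linear pieces.
def maxout(x, weight_groups, biases):
    """Maximum over the affine pieces (w_i, b_i) evaluated at input x."""
    return max(
        sum(w * xi for w, xi in zip(ws, x)) + b
        for ws, b in zip(weight_groups, biases)
    )

x = [1.0, -2.0]                                   # a 2-D input
pieces = [[0.5, 0.5], [-1.0, 0.0], [0.0, 1.0]]    # k = 3 linear pieces
biases = [0.0, 0.5, 0.0]
print(maxout(x, pieces, biases))                  # max(-0.5, -0.5, -2.0)
```

Because the unit learns a piecewise-linear convex activation, stacking such layers lets the network fit the complex decision surfaces that dense crowd scenes require.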

Citations: 0
Haemorrhage diagnosis in colour fundus images using a fast-convolutional neural network based on a modified U-Net.
IF 1.6 | CAS Tier 3, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-02-01 | Epub Date: 2024-02-12 | DOI: 10.1080/0954898X.2024.2310687 | Pages: 198-219
Rathinavelu Sathiyaseelan, Krishnamoorthy Ranganathan, Ramesh Ramamoorthy, M Pedda Chennaiah

Retinal haemorrhage is an early indicator of diabetic retinopathy, necessitating accurate detection for timely diagnosis. Addressing this need, this study proposes an enhanced machine-based diagnostic test for diabetic retinopathy through an updated U-Net framework, adept at scrutinizing fundus images for signs of retinal haemorrhage. The customized U-Net underwent GPU training on the IDRiD database and was validated against the publicly available DIARETDB1 and IDRiD datasets. Emphasizing the complexity of segmentation, the study employed preprocessing techniques to improve image quality and data integrity. The trained neural network then showed a remarkable performance boost, accurately identifying haemorrhage regions with 80% sensitivity, 99.6% specificity, and 98.6% accuracy. The experimental findings confirm the network's reliability and its potential to significantly alleviate ophthalmologists' workload. Notably, an Intersection over Union (IoU) of 76.61% and a Dice coefficient of 86.51% underscore the system's competence. These outcomes signify substantial enhancements in diagnosing critical diabetic retinal conditions and promise profound improvements in diagnostic accuracy and efficiency, marking a significant advancement in automated retinal haemorrhage detection for diabetic retinopathy.
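For binary segmentation masks the two overlap scores reported above are linked by the identity Dice = 2·IoU / (1 + IoU). A quick check on hypothetical pixel counts (true positives, false positives, false negatives are illustrative, not the paper's):

```python
# IoU and Dice from segmentation confusion counts, plus the identity
# Dice = 2*IoU / (1 + IoU) that relates them for binary masks.
def iou(tp, fp, fn):
    return tp / (tp + fp + fn)

def dice(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn)

tp, fp, fn = 60, 10, 20          # illustrative pixel counts
i, d = iou(tp, fp, fn), dice(tp, fp, fn)
print(i, d)                      # 60/90 ≈ 0.667 and 120/150 = 0.8
assert abs(d - 2 * i / (1 + i)) < 1e-12   # the identity holds
```

The identity explains why the paper's Dice (86.51%) sits above its IoU (76.61%): Dice always dominates IoU for the same mask pair.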

Citations: 0
Deep learning and optimization enabled multi-objective for task scheduling in cloud computing.
IF 1.6 | CAS Tier 3, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-02-01 | Epub Date: 2024-08-20 | DOI: 10.1080/0954898X.2024.2391395 | Pages: 79-108
Dinesh Komarasamy, Siva Malar Ramaganthan, Dharani Molapalayam Kandaswamy, Gokuldhev Mony

In cloud computing (CC), task scheduling allocates tasks to the most suitable resources for execution. This article proposes a task scheduling model utilizing multi-objective optimization and a deep learning (DL) model. Initially, multi-objective task scheduling is carried out for incoming user tasks using the proposed hybrid fractional flamingo beetle optimization (FFBO), formed by integrating dung beetle optimization (DBO), the flamingo search algorithm (FSA), and fractional calculus (FC). Here, the fitness function depends on reliability, cost, predicted energy, and makespan; the predicted energy is forecast by a deep residual network (DRN). Thereafter, task scheduling is accomplished with DL using the proposed deep feedforward neural network fused long short-term memory (DFNN-LSTM), a combination of DFNN and LSTM. Moreover, when scheduling the workflow, both the task parameters and the virtual machine's (VM) live parameters are taken into consideration. Task parameters are earliest finish time (EFT), earliest start time (EST), task length, task priority, and actual task running time, whereas VM parameters include memory utilization, bandwidth utilization, capacity, and central processing unit (CPU) usage. The proposed DFNN-LSTM+FFBO model achieves superior makespan, energy, and resource utilization of 0.188, 0.950 J, and 0.238, respectively.
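The abstract names the fitness-function ingredients (reliability, cost, predicted energy, makespan) but not its exact form. One plausible weighted-sum sketch, where the weights and the assumption that all inputs are pre-normalized to [0, 1] are mine, not the paper's:

```python
# Hypothetical multi-objective fitness: lower is better. Makespan, cost, and
# energy are penalized; reliability is rewarded. Weights are assumptions.
def fitness(makespan, cost, energy, reliability,
            w_makespan=0.25, w_cost=0.25, w_energy=0.25, w_reliability=0.25):
    """Weighted scalarization of the four objectives from the abstract."""
    return (w_makespan * makespan + w_cost * cost + w_energy * energy
            - w_reliability * reliability)

# A schedule with lower makespan and energy should score better (lower):
good = fitness(makespan=0.2, cost=0.3, energy=0.4, reliability=0.9)
bad  = fitness(makespan=0.8, cost=0.3, energy=0.9, reliability=0.9)
print(good < bad)   # True
```

An optimizer like FFBO would minimize this scalar over candidate task-to-VM assignments; other scalarizations (e.g. products or Pareto ranking) are equally possible given only the abstract.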

Citations: 0
HCAR-AM ground nut leaf net: Hybrid convolution-based adaptive ResNet with attention mechanism for detecting ground nut leaf diseases with adaptive segmentation.
IF 1.6 | CAS Tier 3, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-02-01 | Epub Date: 2024-11-17 | DOI: 10.1080/0954898X.2024.2424248 | Pages: 38-78
Annamalai Thiruvengadam Madhavi, Kamal Basha Rahimunnisa

Estimating the optimal answer over huge data resources is expensive and degrades system performance. To address this, a new groundnut leaf disease identification model based on deep learning techniques is implemented. Images are collected from standard databases and passed to the preprocessing stage. Relevant features are then extracted from the preprocessed images in two stages. In the first stage, the preprocessed image is segmented using an adaptive TransResunet++, whose variables are tuned with the designed Hybrid Position of Beluga Whale and Cuttle Fish (HP-BWCF) algorithm, and feature set 1 is obtained using KAZE feature points and binary descriptors. In the second stage, the same KAZE feature points and binary descriptors are extracted directly from the preprocessed image, yielding feature set 2. The extracted feature sets 1 and 2 are then concatenated and given to the Hybrid Convolution-based Adaptive ResNet with Attention Mechanism (HCAR-AM) to detect groundnut leaf diseases effectively. The parameters of the HCAR-AM are tuned via the same HP-BWCF. The experimental outcome is analysed against various recently developed groundnut leaf disease detection approaches using various performance measures.
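The fusion step above is plain concatenation of the two feature sets; a minimal sketch, where the vectors are illustrative stand-ins for the KAZE-derived descriptors (the real ones would be much longer):

```python
# Feature-level fusion by concatenation: the classifier sees one vector that
# carries both the segmented-region and whole-image descriptors. Values are
# illustrative placeholders, not real KAZE descriptors.
set1 = [0.12, 0.80, 0.33]        # feature set 1: from the segmented region
set2 = [0.55, 0.10]              # feature set 2: from the preprocessed image
fused = set1 + set2              # simple concatenation, as in the abstract
print(len(fused))                # the fused vector has len(set1) + len(set2) dims
```

Concatenation preserves both views and leaves it to the downstream HCAR-AM network to weight them, which is where its attention mechanism comes in.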

Citations: 0
A review on real time implementation of soft computing techniques in thermal power plant.
IF 1.6 | CAS Tier 3, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-02-01 | Epub Date: 2024-11-27 | DOI: 10.1080/0954898X.2024.2429721 | Pages: 1-37
Love Kumar Thawait, Mukesh Kumar Singh

A thermal power plant is a common type of power plant that generates electricity by burning fuel. As a significant component of the energy sector, thermal power plants face several issues that reduce productivity. Researchers have tried different mechanisms to improve thermal power plant production along varied dimensions, and given the diversity of dimensions considered in existing works, the present review endeavours to afford a comprehensive summary of them. To achieve this, the study reviews articles from 2019-2023 concerned with the use of soft computing (SC) methodologies (encompassing AI, Machine Learning (ML), and Deep Learning (DL)) for enhancing the productivity of thermal power plants along various dimensions. Conventional AI-based approaches are comparatively evaluated for their contribution to improving thermal power plant production. A critical assessment then covers the year-wise distribution and the varied dimensions addressed by studies in this area. This should help future researchers determine which dimensions have received limited or high attention, so that appropriate research can be performed. Finally, future suggestions and research gaps are included to offer new stimulus for further investigation of AI in thermal power plants.

Citations: 0
Q-learning and fuzzy logic multi-tier multi-access edge clustering for 5G V2X communication.
IF 1.6 | CAS Tier 3, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-02-01 | Epub Date: 2024-03-06 | DOI: 10.1080/0954898X.2024.2309947 | Pages: 174-197
Sangeetha Alagumani, Uma Maheswari Natarajan

The fifth-generation (5G) network is required to meet the growing demand for fast data speeds and an expanding number of customers. Apart from offering higher speeds, 5G will be employed in other areas such as the Internet of Things, broadcast services, and so on. Energy efficiency, scalability, resiliency, interoperability, and high data rate with low delay are the primary requirements and obstacles of 5G cellular networks. Owing to IEEE 802.11p's constraints, such as limited coverage, inability to handle dense vehicle networks, signal congestion, and connectivity outages, efficient data distribution is a major challenge (the MAC contention problem). In this research, vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-pedestrian (V2P) services are used to overcome bandwidth constraints in very dense cellular vehicle-to-everything (C-V2X) network communications. Clustering is done through multi-layered multi-access edge clustering, which helps reduce vehicle contention. Fuzzy logic and Q-learning are used for a multi-hop route selection system. The proposed protocol adjusts the number of cluster-head nodes using a Q-learning algorithm, allowing it to adapt quickly to a range of scenarios with varying bandwidths and vehicle densities.
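The abstract specifies Q-learning for adapting the cluster-head count but not the exact formulation. A minimal tabular sketch, where the states (vehicle-density levels), actions (shrink/keep/grow the head count), and rewards are illustrative assumptions:

```python
# Tabular Q-learning sketch for adapting the number of cluster heads.
# States, actions, and the reward signal are hypothetical, not the paper's.
ALPHA, GAMMA = 0.5, 0.9
ACTIONS = (-1, 0, 1)          # shrink / keep / grow the cluster-head count

def q_update(Q, state, action, reward, next_state):
    """Standard update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

Q = {(s, a): 0.0 for s in ("low_density", "high_density") for a in ACTIONS}

# Suppose growing the head count in a dense scenario earns reward 1.0 twice:
q_update(Q, "high_density", 1, reward=1.0, next_state="high_density")
q_update(Q, "high_density", 1, reward=1.0, next_state="high_density")
print(Q[("high_density", 1)])   # value rises toward the discounted return
```

In deployment the reward would come from measured network quantities (e.g. throughput or delay), and the fuzzy-logic module mentioned in the abstract would handle route selection alongside this adaptation loop.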

Citations: 0
Performance analysis of image retrieval system using deep learning techniques.
IF 1.1 | Tier 3, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-01-20 | DOI: 10.1080/0954898X.2025.2451388
Selvalakshmi B, Hemalatha K, Kumarganesh S, Vijayalakshmi P

Image retrieval is the process of retrieving images relevant to a query image with minimal searching time on the internet. The problem with the conventional Content-Based Image Retrieval (CBIR) system is that it produces retrieval results for either colour images or grey-scale images alone. Moreover, the CBIR system is complex and consumes more time to produce significant retrieval results. These problems are overcome through the methodologies proposed in this work. In this paper, General Images (GI) and Medical Images (MI) are retrieved using a deep learning architecture. The proposed system is designed with a feature computation module, a Retrieval Convolutional Neural Network (RETCNN) module, and a distance computation algorithm. The distance computation algorithm computes the distances between the query image and the images in the datasets and produces the retrieval results. The average precision and recall for the proposed RETCNN-based CBIRS are 98.98% and 99.15%, respectively, for the GI category, and 99.04% and 98.89%, respectively, for the MI category. These experimental results demonstrate the higher image retrieval rate of the proposed system.
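The distance-computation step can be sketched as ranking database feature vectors by their distance to the query's feature vector. This is a generic illustration, not the paper's RETCNN: the Euclidean metric and the toy feature vectors are assumptions.

```python
import math

# Illustrative sketch: rank database images by Euclidean distance between
# feature vectors, smallest distance first, and return the top-k matches.

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query_feat, db_feats, top_k=3):
    """Return indices of the top_k database feature vectors closest to the query."""
    ranked = sorted(range(len(db_feats)),
                    key=lambda i: euclidean(query_feat, db_feats[i]))
    return ranked[:top_k]

# Toy 2-D "feature vectors"; a real system would use deep CNN embeddings.
db = [[0.0, 0.0], [1.0, 1.0], [0.1, 0.2], [5.0, 5.0]]
print(retrieve([0.0, 0.1], db, top_k=2))  # → [0, 2]
```

In practice the query feature would come from the same network that embedded the database images, so that distances in feature space reflect visual similarity.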

Citations: 0
A novel efficient data storage and data auditing in cloud environment using enhanced child drawing development optimization strategy.
IF 1.1 | Tier 3, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-01-17 | DOI: 10.1080/0954898X.2024.2443622
Aruna Kari Balakrishnan, Arunachalaperumal Chellaperumal, Sudha Lakshmanan, Sureka Vijayakumar

Optimization of the cloud-based data structures is carried out using the Adaptive Level and Skill Rate-based Child Drawing Development Optimization (ALSR-CDDO) algorithm. The overall computing and communication cost is also reduced by optimally selecting these data structures with the ALSR-CDDO algorithm. Storage of the data in the cloud platform is performed using a Divide and Conquer Table (D&CT); the location table and the information table are generated using the D&CT method. Details such as the file information, file ID, version number, and user ID are all present in the information table. Every time data is deleted or updated, its version number is modified, and whenever an update takes place using D&CT, the location table is also upgraded. Information regarding the location of a file in the Cloud Service Provider (CSP) is given in the location table. Once the data is stored in the CSP, auditing is performed on the stored data: both dynamic and batch auditing are carried out, even when the data is updated dynamically in the CSP. The security offered by the scheme is verified by contrasting it with other existing auditing schemes.
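The bookkeeping described for the two D&CT tables can be sketched as follows. The field names and address format are illustrative assumptions, not the paper's schema; the point is only that every update or delete bumps the version number in the information table and that updates also refresh the location table.

```python
# Hypothetical sketch of D&CT bookkeeping: an information table keyed by
# file ID (holding name, owner, and version number) and a location table
# mapping file IDs to an address inside the Cloud Service Provider (CSP).

class DCTables:
    def __init__(self):
        self.info = {}       # file_id -> {"name", "user", "version"}
        self.location = {}   # file_id -> CSP block address

    def store(self, file_id, name, user, address):
        self.info[file_id] = {"name": name, "user": user, "version": 1}
        self.location[file_id] = address

    def update(self, file_id, address):
        self.info[file_id]["version"] += 1   # every update bumps the version
        self.location[file_id] = address     # location table upgraded too

    def delete(self, file_id):
        self.info[file_id]["version"] += 1   # deletions are also versioned
        self.location.pop(file_id, None)     # file no longer locatable

t = DCTables()
t.store("f1", "report.txt", "u42", "csp://block/7")
t.update("f1", "csp://block/9")
print(t.info["f1"]["version"])  # → 2
```

An auditor can then compare the version numbers it holds against the information table to detect stale or tampered copies, which is the role the metadata plays in the auditing step above.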

Citations: 0