
Latest publications in Machine learning with applications

Ensemble prediction of RRC session duration in real-world NR/LTE networks
Pub Date: 2024-06-06 DOI: 10.1016/j.mlwa.2024.100564
Roopesh Kumar Polaganga, Qilian Liang

In the rapidly evolving realm of telecommunications, Machine Learning (ML) stands as a key driver for intelligent 6G networks, leveraging diverse datasets to optimize real-time network parameters. This transition extends seamlessly from 4G LTE and 5G NR to 6G, carrying ML insights from existing networks, specifically in predicting RRC session durations. This work introduces a novel use of a weighted-ensemble approach built with the AutoGluon library, employing multiple base models to accurately predict user session durations in real-world LTE and NR networks. Comparative analysis reveals superior accuracy in LTE, with 'Data Volume' as a crucial feature due to its direct impact on network load and user experience. Notably, NR sessions, marked by extended durations, reflect unique patterns attributed to Fixed Wireless Access (FWA) devices. An ablation study underscores the weighted ensemble's superior performance. This study highlights the need for techniques like data categorization to enhance prediction accuracy for evolving technologies, providing insights for improved adaptability in ML-based prediction models for the next network generation.
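
The following is a minimal sketch of the weighted-ensemble setup the abstract refers to, using the AutoGluon tabular API on a synthetic stand-in dataset; the column names (data_volume_mb, rat, hour_of_day, session_duration_s) are hypothetical placeholders, not the paper's actual features.

```python
# Sketch: AutoGluon weighted ensemble for session-duration regression (assumptions noted above).
import numpy as np
import pandas as pd
from autogluon.tabular import TabularPredictor

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "data_volume_mb": rng.exponential(50, n),      # hypothetical feature
    "rat": rng.choice(["LTE", "NR"], n),           # radio access technology
    "hour_of_day": rng.integers(0, 24, n),
})
df["session_duration_s"] = 5 + 0.8 * df["data_volume_mb"] + rng.normal(0, 5, n)  # synthetic target
train, test = df.iloc[:800], df.iloc[800:]

# AutoGluon fits several base models (gradient boosting, neural nets, KNN, ...) and then
# combines them into a greedy weighted ensemble, which is the ensemble type named above.
predictor = TabularPredictor(label="session_duration_s").fit(train, time_limit=300)
print(predictor.leaderboard(test))                              # WeightedEnsemble vs. base models
preds = predictor.predict(test.drop(columns=["session_duration_s"]))
```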

Citations: 0
Identifying losers: Automatic identification of growth-stunted salmon in aquaculture using computer vision
Pub Date: 2024-06-01 DOI: 10.1016/j.mlwa.2024.100562
Kana Banno, Filipe Marcel Fernandes Gonçalves, Clara Sauphar, Marianna Anichini, Aline Hazelaar, Linda Helen Sperre, Christian Stolz, Grete Hansen Aas, Lars Christian Gansel, Ricardo da Silva Torres

During the production of salmonids in aquaculture, it is common to observe growth-stunted individuals. The cause of the so-called “loser fish syndrome” is unclear and requires further investigation. Here, we present and compare computer vision systems for the automatic detection and classification of loser fish in Atlantic salmon images taken in sea cages. We evaluated two end-to-end approaches (combined detection and classification) based on YoloV5 and YoloV7, and a two-stage approach based on transfer learning for detection and an ensemble of classifiers (e.g., linear perceptron, Adaline, C-support vector, K-nearest neighbours, and multi-layer perceptron) for classification. To our knowledge, an ensemble of consolidated classifiers from the literature has not been applied to this problem before. Classification entailed assigning every fish to either a healthy or a loser class. The results of the automatic classification were compared against the reliability of human classification. The best-performing computer vision approach was based on YoloV7, which reached a precision of 86.30%, a recall of 71.75%, and an F1 score of 78.35%. YoloV5 presented a precision of 79.7%, while the two-stage approach reached a precision of 66.05%. Human classification had substantial agreement strength (Fleiss’ Kappa of 0.68), highlighting that evaluation by a human is subjective. Our proposed automatic detection and classification system will enable farmers and researchers to follow the abundance of losers throughout the production period. We provide our dataset of annotated salmon images for further research.
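
Below is a minimal sketch of the second-stage classifier ensemble on synthetic per-fish feature vectors, using scikit-learn estimators; Adaline is omitted because scikit-learn has no direct implementation, and the features, hyperparameters, and hard-voting scheme are assumptions rather than the authors' exact setup.

```python
# Sketch: ensemble of classifiers for healthy (0) vs. loser (1) fish crops (assumptions noted above).
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for feature vectors extracted from detected fish crops.
X, y = make_classification(n_samples=600, n_features=32, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("perceptron", make_pipeline(StandardScaler(), Perceptron())),
        ("svc", make_pipeline(StandardScaler(), SVC())),                      # C-support vector
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
        ("mlp", make_pipeline(StandardScaler(), MLPClassifier(max_iter=500, random_state=0))),
    ],
    voting="hard",  # majority vote across the base classifiers
)
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```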

Citations: 0
Empirical loss weight optimization for PINN modeling laser bio-effects on human skin for the 1D heat equation
Pub Date: 2024-06-01 DOI: 10.1016/j.mlwa.2024.100563
Jenny Farmer, Chad A. Oian, Brett A. Bowman, Taufiquar Khan

The application of deep neural networks towards solving problems in science and engineering has demonstrated encouraging results with the recent formulation of physics-informed neural networks (PINNs). Through the development of refined machine learning techniques, the high computational cost of obtaining numerical solutions for partial differential equations governing complicated physical systems can be mitigated. However, solutions are not guaranteed to be unique, and are subject to uncertainty caused by the choice of network model parameters. For critical systems with significant consequences for errors, assessing and quantifying this model uncertainty is essential. In this paper, an application of PINN for laser bio-effects with limited training data is provided for uncertainty quantification analysis. Additionally, an efficacy study is performed to investigate the impact of the relative weights of the loss components of the PINN and how the uncertainty in the predictions depends on these weights. Network ensembles are constructed to empirically investigate the diversity of solutions across an extensive sweep of hyper-parameters to determine the model that consistently reproduces a high-fidelity numerical simulation.
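
A minimal sketch of a PINN for the 1D heat equation u_t = alpha * u_xx with explicitly weighted loss terms follows, to make the loss-weight trade-off studied above concrete; the weights, network size, and initial/boundary conditions are illustrative assumptions, not the paper's configuration.

```python
# Sketch: weighted composite PINN loss for the 1D heat equation (assumptions noted above).
import torch
import torch.nn as nn

alpha = 0.01                              # assumed thermal diffusivity
w_pde, w_ic, w_bc = 1.0, 10.0, 10.0       # hypothetical loss weights of the kind being tuned

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(x, t):
    x.requires_grad_(True); t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - alpha * u_xx              # residual of u_t = alpha * u_xx

for step in range(2000):
    x_c, t_c = torch.rand(256, 1), torch.rand(256, 1)                   # collocation points
    x_i, t_i = torch.rand(64, 1), torch.zeros(64, 1)                    # initial condition at t = 0
    x_b, t_b = torch.randint(0, 2, (64, 1)).float(), torch.rand(64, 1)  # boundaries x = 0, 1

    loss_pde = pde_residual(x_c, t_c).pow(2).mean()
    loss_ic = (net(torch.cat([x_i, t_i], 1)) - torch.sin(torch.pi * x_i)).pow(2).mean()
    loss_bc = net(torch.cat([x_b, t_b], 1)).pow(2).mean()               # u = 0 at both ends

    loss = w_pde * loss_pde + w_ic * loss_ic + w_bc * loss_bc           # weighted composite loss
    opt.zero_grad(); loss.backward(); opt.step()
```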

Citations: 0
Applications of machine learning in surge prediction for vehicle turbochargers
Pub Date: 2024-05-16 DOI: 10.1016/j.mlwa.2024.100560
Hiroki Saito, Dai Kanzaki, Kazuo Yonekura

Surging in vehicle turbochargers is an important phenomenon that can damage the compressor and its peripheral equipment due to pressure fluctuations and vibration, so it is essential to understand the operating points where surging occurs. In this paper, we constructed a Neural Network (NN) that can predict these operating points, using as explanatory variables the geometry parameters of the vehicle turbocharger and one-dimensional predictions of the flow rates at surge. Our contribution is the use of machine learning to enable fast and low-cost prediction of surge points, which is usually only available through experiments or calculation-intensive Computational Fluid Dynamics (CFD). Evaluations conducted on the test data revealed that prediction accuracy was poor for some turbocharger geometries and operating conditions, and that this was associated with the relatively small data quantity included in the training data. Expanding the appropriate data offers some prospect of improving prediction accuracy.
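
The sketch below shows the kind of tabular surge-point regressor described above, assuming the inputs are turbocharger geometry parameters plus a one-dimensional surge flow-rate prediction; the data are synthetic and scikit-learn's MLPRegressor stands in for the authors' network.

```python
# Sketch: NN regression of the surge operating point from geometry + 1D prediction (assumptions above).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))          # e.g. wheel diameter, trim, A/R, ..., plus the 1D surge prediction
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=500)   # synthetic surge flow rate (target)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out geometries:", model.score(X_te, y_te))
```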

Citations: 0
Machine learning feature importance selection for predicting aboveground biomass in African savannah with landsat 8 and ALOS PALSAR data
Pub Date: 2024-05-16 DOI: 10.1016/j.mlwa.2024.100561
Sa'ad Ibrahim, Heiko Balzter, Kevin Tansey

In remote sensing, multiple input bands are derived from various sensors covering different regions of the electromagnetic spectrum. Each spectral band plays a unique role in land use/land cover characterization. For example, while integrating multiple sensors for predicting aboveground biomass (AGB) is important for achieving high accuracy, reducing the dataset size by eliminating redundant and irrelevant spectral features is essential for enhancing the performance of machine learning algorithms. This accelerates the learning process, thereby yielding simpler and more efficient models. Our results indicate that, compared with the individual sensor datasets, the random forest (RF) classification approach using recursive feature elimination (RFE) increased the accuracy based on F score by 82.86% and 26.19%, respectively. The mutual information regression (MIR) method shows a slight increase in accuracy when considering individual sensor datasets, but its accuracy decreases when all features are taken into account for all models. Overall, the combination of features from the Landsat 8, ALOS PALSAR backscatter, and elevation data selected based on RFE provided the best AGB estimation for the RF and XGBoost models. In contrast, for the k-nearest neighbors (KNN) and support vector machine (SVM) models, no significant improvement in AGB estimation was detected even when RFE and MIR were used. The effect of parameter optimization was found to be more significant for RF than for all the other methods. The AGB maps show patterns of AGB estimates consistent with those of the reference dataset. This study shows how prediction errors can be minimized through feature selection using different ML classifiers.
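
Below is a minimal sketch of RFE-based feature selection with a random forest, with mutual information regression scores shown for comparison; the band names are hypothetical stand-ins for the Landsat 8, ALOS PALSAR backscatter, and elevation features, and the AGB values are synthetic.

```python
# Sketch: RFE vs. mutual-information feature ranking for AGB regression (assumptions noted above).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE, mutual_info_regression

rng = np.random.default_rng(0)
bands = ["B2", "B3", "B4", "B5", "NDVI", "HH", "HV", "elevation"]   # hypothetical inputs
X = pd.DataFrame(rng.normal(size=(300, len(bands))), columns=bands)
y = rng.normal(size=300)                                            # synthetic AGB values

rf = RandomForestRegressor(n_estimators=200, random_state=0)
selector = RFE(rf, n_features_to_select=4).fit(X, y)        # recursively drops the weakest features
print("RFE-selected features:", list(X.columns[selector.support_]))

mi = mutual_info_regression(X, y)                            # MIR scores for comparison
print("MIR ranking:", sorted(zip(bands, mi), key=lambda p: -p[1]))
```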

Citations: 0
Explaining vulnerabilities of heart rate biometric models securing IoT wearables
Pub Date: 2024-05-13 DOI: 10.1016/j.mlwa.2024.100559
Chi-Wei Lien, Sudip Vhaduri, Sayanton V. Dibbo, Maliha Shaheed

In the field of health informatics, extensive research has been conducted to predict diseases and extract valuable insights from patient data. However, a significant gap exists in addressing privacy concerns associated with data collection. Therefore, there is an urgent need to develop a machine-learning authentication model to secure the patients’ data seamlessly and continuously, as well as to find potential explanations when the model may fail. To address this challenge, we propose a unique approach to secure patients’ data using novel eigenheart features calculated from coarse-grained heart rate data. Various statistical and visualization techniques are utilized to explain the potential vulnerabilities of the model. Though it is feasible to develop continuous user authentication models from readily available heart rate data with reasonable performance, they are affected by factors such as age and Body Mass Index (BMI). These factors will be crucial for developing a more robust authentication model in the future.
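
One plausible reading of the eigenheart features is a PCA projection of windows of coarse-grained heart-rate data, by analogy with eigenfaces; the sketch below follows that assumption, and the window length, component count, and data are illustrative only.

```python
# Sketch: PCA-based "eigenheart" feature extraction from heart-rate windows (an assumption, see above).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
windows = rng.normal(loc=70, scale=8, size=(200, 60))   # 200 windows of 60 coarse-grained HR samples

pca = PCA(n_components=10)
eigenheart_features = pca.fit_transform(windows)        # per-window projection onto principal axes
print(eigenheart_features.shape)                        # (200, 10) vectors fed to the authenticator
```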

Citations: 0
TeenyTinyLlama: Open-source tiny language models trained in Brazilian Portuguese
Pub Date: 2024-05-10 DOI: 10.1016/j.mlwa.2024.100558
Nicholas Kluge Corrêa, Sophia Falk, Shiza Fatimah, Aniket Sen, Nythamar De Oliveira

Large language models (LLMs) have significantly advanced natural language processing, but their progress has not been equal across languages. While most LLMs are trained in high-resource languages like English, multilingual models generally underperform monolingual ones. Additionally, aspects of their multilingual foundation sometimes restrict the byproducts they produce, such as their computational demands and licensing regimes. In this study, we document the development of open-foundation models tailored for use in low-resource settings, their limitations, and their benefits. This is the TeenyTinyLlama pair: two compact models for Brazilian Portuguese text generation. We release them under the permissive Apache 2.0 license on GitHub and Hugging Face for community use and further development.
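
A minimal sketch of loading one of the released models with the Hugging Face transformers library follows; the repository id is an assumption based on the authors' Hugging Face namespace, so check their model page for the exact identifiers and sizes.

```python
# Sketch: text generation with a TeenyTinyLlama checkpoint (repo id assumed, see above).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nicholasKluge/TeenyTinyLlama-160m"   # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("A capital do Brasil é", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```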

Citations: 0
Using ChatGPT to annotate a dataset: A case study in intelligent tutoring systems
Pub Date: 2024-05-09 DOI: 10.1016/j.mlwa.2024.100557
Aleksandar Vujinović, Nikola Luburić, Jelena Slivka, Aleksandar Kovačević

Large language models like ChatGPT can learn in-context (ICL) from examples. Studies showed that, due to ICL, ChatGPT achieves impressive performance in various natural language processing tasks. However, to the best of our knowledge, this is the first study that assesses ChatGPT's effectiveness in annotating a dataset for training instructor models in intelligent tutoring systems (ITSs). The task of an ITS instructor model is to automatically provide effective tutoring instruction given a student's state, mimicking human instructors. These models are typically implemented as hardcoded rules, requiring expertise, and limiting their ability to generalize and personalize instructions. These problems could be mitigated by utilizing machine learning (ML). However, developing ML models requires a large dataset of student states annotated by corresponding tutoring instructions. Using human experts to annotate such a dataset is expensive, time-consuming, and requires pedagogical expertise. Thus, this study explores ChatGPT's potential to act as a pedagogy expert annotator. Using prompt engineering, we created a list of instructions a tutor could recommend to a student. We manually filtered this list and instructed ChatGPT to select the appropriate instruction from the list for the given student's state. We manually analyzed ChatGPT's responses that could be considered incorrectly annotated. Our results indicate that using ChatGPT as an annotator is an effective alternative to human experts. The contributions of our work are (1) a novel dataset annotation methodology for the ITS, (2) a publicly available dataset of student states annotated with tutoring instructions, and (3) a list of possible tutoring instructions.
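
The sketch below shows the annotation loop described above using the OpenAI Python SDK: the model is asked to pick one instruction from a fixed, manually filtered list for a given student state; the instruction list, prompt wording, and model name are illustrative, not the authors' exact setup.

```python
# Sketch: ChatGPT as an annotator choosing a tutoring instruction (prompt and list are illustrative).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

INSTRUCTIONS = [   # hypothetical, manually filtered instruction list
    "Give a hint about the failing test case.",
    "Ask the student to restate the problem in their own words.",
    "Point the student to the relevant documentation section.",
]

def annotate(student_state: str) -> str:
    prompt = (
        "You are a tutoring expert. Given the student's state below, reply with the single "
        "most appropriate instruction, copied verbatim from the numbered list.\n\n"
        f"Student state: {student_state}\n\nInstructions:\n"
        + "\n".join(f"{i + 1}. {s}" for i, s in enumerate(INSTRUCTIONS))
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

print(annotate("The student's solution fails the edge-case test for empty input."))
```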

Citations: 0
A deep learning approach for Maize Lethal Necrosis and Maize Streak Virus disease detection
Pub Date: 2024-05-07 DOI: 10.1016/j.mlwa.2024.100556
Tony O’Halloran, George Obaido, Bunmi Otegbade, Ibomoiye Domor Mienye

Maize is an important crop cultivated in Sub-Saharan Africa, essential for food security. However, its cultivation faces significant challenges due to debilitating diseases such as Maize Lethal Necrosis (MLN) and Maize Streak Virus (MSV), which can lead to severe yield losses. Traditional plant disease diagnosis methods are often time-consuming and prone to errors, necessitating more efficient approaches. This study explores the application of deep learning, specifically Convolutional Neural Networks (CNNs), in the automatic detection and classification of maize diseases. We investigate six architectures: Basic CNN, EfficientNet V2 B0 and B1, LeNet-5, VGG-16, and ResNet50, using a dataset of 15,344 images comprising MSV, MLN, and healthy maize leaves. Additionally, we performed hyperparameter tuning to improve the performance of the models and applied Gradient-weighted Class Activation Mapping (Grad-CAM) for model interpretability. Our results show that the EfficientNet V2 B0 model achieved an accuracy of 99.99% in distinguishing between healthy and disease-infected plants. The results of this study contribute to the advancement of AI applications in agriculture, particularly in diagnosing maize diseases within Sub-Saharan Africa.
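
Below is a minimal sketch of EfficientNet V2 B0 transfer learning for the three classes (MSV, MLN, healthy), in the spirit of the best-performing model above; the input size, directory layout, and training settings are illustrative assumptions, and the Grad-CAM step is not shown.

```python
# Sketch: EfficientNetV2B0 transfer learning for three maize leaf classes (assumptions noted above).
import tensorflow as tf

base = tf.keras.applications.EfficientNetV2B0(include_top=False, weights="imagenet",
                                              input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                                    # keep the ImageNet backbone frozen at first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(3, activation="softmax"),       # MSV, MLN, healthy
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

train_ds = tf.keras.utils.image_dataset_from_directory(   # assumes one folder per class
    "maize_leaves/train", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "maize_leaves/val", image_size=(224, 224), batch_size=32)

model.fit(train_ds, validation_data=val_ds, epochs=10)
```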

Citations: 0
Enhancing breast cancer segmentation and classification: An Ensemble Deep Convolutional Neural Network and U-net approach on ultrasound images
Pub Date: 2024-05-01 DOI: 10.1016/j.mlwa.2024.100555
Md Rakibul Islam, Md Mahbubur Rahman, Md Shahin Ali, Abdullah Al Nomaan Nafi, Md Shahariar Alam, Tapan Kumar Godder, Md Sipon Miah, Md Khairul Islam

Breast cancer is a condition where the irregular growth of breast cells occurs uncontrollably, leading to the formation of tumors. It poses a significant threat to women’s lives globally, emphasizing the need for enhanced methods of detecting and categorizing the disease. In this work, we propose an Ensemble Deep Convolutional Neural Network (EDCNN) model that exhibits superior accuracy compared to several transfer learning models and the Vision Transformer model. Our EDCNN model integrates the strengths of the MobileNet and Xception models to improve its performance in breast cancer detection and classification. We employ various preprocessing techniques, including image resizing, data normalization, and data augmentation, to prepare the data for analysis. By following these measures, the formatting is optimized, and the model’s capacity to make generalizations is improved. We trained and evaluated our proposed EDCNN model using ultrasound images, a widely available modality for breast cancer imaging. The outcomes of our experiments illustrate that the EDCNN model attains an exceptional accuracy of 87.82% on Dataset 1 and 85.69% on Dataset 2, surpassing the performance of several well-known transfer learning models and the Vision Transformer model. Furthermore, an AUC value of 0.91 on Dataset 1 highlights the robustness and effectiveness of our proposed model. Moreover, we highlight the incorporation of the Grad-CAM Explainable Artificial Intelligence (XAI) technique to improve the interpretability and transparency of our proposed model. Additionally, we performed image segmentation using the U-Net segmentation technique on the input ultrasound images. This segmentation process allowed for the identification and isolation of specific regions of interest, facilitating a more comprehensive analysis of breast cancer characteristics. In conclusion, the study presents a creative approach to detecting and categorizing breast cancer, demonstrating the superior performance of the EDCNN model compared to well-established transfer learning models. Through advanced deep learning techniques and image segmentation, this study contributes to improving diagnosis and treatment outcomes in breast cancer.
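
The sketch below illustrates the classification side only: an ensemble that fuses MobileNet and Xception branches for two-class ultrasound classification by averaging their class probabilities; the authors' exact fusion strategy, class count, input size, and classification head may differ, and the U-Net segmentation and Grad-CAM stages are not shown.

```python
# Sketch: MobileNet + Xception soft-voting ensemble for two-class ultrasound classification
# (class count, fusion strategy, and head are assumptions, see above).
import tensorflow as tf

inputs = tf.keras.Input(shape=(224, 224, 3))
scaled = tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0)(inputs)   # both backbones expect [-1, 1]

def branch(backbone_fn):
    backbone = backbone_fn(include_top=False, weights="imagenet",
                           input_shape=(224, 224, 3), pooling="avg")
    return tf.keras.layers.Dense(2, activation="softmax")(backbone(scaled))

out_mobilenet = branch(tf.keras.applications.MobileNet)
out_xception = branch(tf.keras.applications.Xception)

outputs = tf.keras.layers.Average()([out_mobilenet, out_xception])     # average the branch probabilities
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```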

Citations: 0