
Latest Publications in Algorithms

Using Markov Random Field and Analytic Hierarchy Process to Account for Interdependent Criteria
IF 2.3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-12-19 DOI: 10.3390/a17010001
Jih-Jeng Huang, Chin-Yi Chen
The Analytic Hierarchy Process (AHP) has been a widely used multi-criteria decision-making (MCDM) method since the 1980s because of its simplicity and rationality. However, the conventional AHP assumes criteria independence, which is not always accurate in realistic scenarios where interdependencies between criteria exist. Several methods have been proposed to relax the postulation of independent criteria in the AHP, e.g., the Analytic Network Process (ANP). However, these methods usually need a large number of pairwise comparison matrices (PCMs), making them hard to apply to complicated, large-scale problems. This paper presents a groundbreaking approach to address this issue by incorporating discrete Markov Random Fields (MRFs) into the AHP framework. Our method enhances decision making by effectively and sensibly capturing interdependencies among criteria, reflecting actual weights. Moreover, we showcase a numerical example to illustrate the proposed method and compare the results with those of the conventional AHP and the Fuzzy Cognitive Map (FCM). The findings highlight our method’s ability to influence global priority values and the ranking of alternatives when interdependencies between criteria are considered. These results suggest that the introduced method provides a flexible and adaptable framework for modeling interdependencies between criteria, ultimately leading to more accurate and reliable decision-making outcomes.
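As a concrete anchor for the AHP side of the discussion, the following sketch derives priority weights from a single pairwise comparison matrix using the standard geometric-mean approximation; the matrix values are hypothetical, and this is not the authors' MRF-extended code.

```python
import math

def ahp_weights(pcm):
    """Normalized priority weights for a reciprocal pairwise comparison
    matrix (PCM), via the geometric mean of each row."""
    n = len(pcm)
    gm = [math.prod(row) ** (1.0 / n) for row in pcm]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical 3-criteria PCM: pcm[i][j] states how strongly criterion i
# outweighs criterion j (reciprocal entries below the diagonal).
pcm = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]
weights = ahp_weights(pcm)
```

The MRF extension described in the abstract would adjust such weights for criterion interdependencies; the geometric-mean step above is only the classical independent-criteria baseline.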
Citations: 0
Velocity Estimation Using Time-Differenced Carrier Phase and Doppler Shift with Different Grades of Devices: From Smartphones to Professional Receivers
IF 2.3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-12-19 DOI: 10.3390/a17010002
A. Angrisano, Giovanni Cappello, S. Gaglione, C. Gioia
Velocity estimation plays a key role in several applications, such as navigation and mobile mapping systems, and GNSS is currently a common way to achieve reliable and accurate velocity. Two approaches are mainly used to obtain velocity from GNSS measurements: Doppler observations and carrier phases differenced in time (TDCP). In a benign environment, Doppler-based velocity can be estimated accurately to within a few cm/s, while TDCP-based velocity can be estimated accurately to within a few mm/s. On the other hand, the TDCP technique is more prone to availability shortages and the presence of blunders. In this work, the two approaches are tested using three devices of different grades: a high-grade geodetic receiver, a high-sensitivity receiver, and a GNSS chip mounted on a smartphone. The measurements of geodetic receivers are inherently cleaner, providing an accurate solution, while the remaining two receivers provide worse results. The case of smartphone GNSS chips can be particularly critical owing to the equipped antenna, which makes the measurements noisy and largely affected by blunders. The GNSSs are considered separately in order to assess the performance of the single systems. The analysis carried out in this research confirms the previous considerations about receiver grades and processing techniques. Additionally, the obtained results highlight the necessity of adopting a diagnostic approach to the measurements, such as RAIM-FDE, especially for low-grade receivers.
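The two observables compared above can be stated compactly. The sketch below (hypothetical values, GPS L1 only) shows how a Doppler measurement maps to a range rate and how TDCP yields the average range rate between two epochs; an actual velocity solution would combine many such observables across satellites.

```python
# GPS L1 carrier wavelength in metres (c / 1575.42 MHz).
L1_WAVELENGTH = 299792458.0 / 1575.42e6

def doppler_range_rate(doppler_hz):
    """Instantaneous range rate (m/s) from a Doppler measurement; sign
    convention: positive Doppler means the receiver is closing on the
    satellite."""
    return -L1_WAVELENGTH * doppler_hz

def tdcp_range_rate(phase_t1_cycles, phase_t2_cycles, dt_s):
    """Average range rate (m/s) from the carrier phase differenced in
    time (TDCP) between two epochs dt_s seconds apart."""
    return L1_WAVELENGTH * (phase_t2_cycles - phase_t1_cycles) / dt_s
```

The mm/s-level advantage of TDCP cited above comes from the sub-cycle precision of the carrier phase; the sketch omits clock drift and blunder screening (e.g., RAIM-FDE), which the article shows matter greatly for low-grade receivers.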
Citations: 0
Image Deblurring Based on Convex Non-Convex Sparse Regularization and Plug-and-Play Algorithm
IF 2.3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-12-18 DOI: 10.3390/a16120574
Yi Wang, Yating Xu, Tianjian Li, Tao Zhang, Jian Zou
Image deblurring based on sparse regularization has garnered significant attention, but there are still certain limitations that need to be addressed. For instance, convex sparse regularization tends to exhibit biased estimation, which can adversely impact the deblurring performance, while non-convex sparse regularization poses challenges in terms of solving techniques. Furthermore, the performance of the traditional iterative algorithm also needs to be improved. In this paper, we propose an image deblurring method based on convex non-convex (CNC) sparse regularization and a plug-and-play (PnP) algorithm. The utilization of CNC sparse regularization not only mitigates estimation bias but also guarantees the overall convexity of the image deblurring model. The PnP algorithm is an advanced learning-based optimization algorithm that surpasses traditional optimization algorithms in terms of efficiency and performance by utilizing a state-of-the-art denoiser to replace the proximal operator. Numerical experiments verify the performance of our proposed algorithm in image deblurring.
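A minimal illustration of the plug-and-play idea: in a proximal-gradient loop for deblurring, the proximal operator of the regularizer is swapped for a denoiser. The 1-D signal, toy blur kernel, and moving-average "denoiser" below are stand-ins; the paper uses a learned state-of-the-art denoiser together with CNC regularization.

```python
def convolve(x, k):
    """1-D convolution with edge replication, same length as x."""
    n, m = len(x), len(k)
    pad = m // 2
    xp = [x[0]] * pad + list(x) + [x[-1]] * pad
    return [sum(xp[i + j] * k[j] for j in range(m)) for i in range(n)]

def pnp_deblur(y, k, steps=200, lr=0.5):
    """Proximal-gradient deblurring where the proximal step is replaced
    ("plugged") by a denoiser -- here a toy moving average."""
    x = list(y)
    for _ in range(steps):
        # Gradient step on the data-fidelity term ||k * x - y||^2.
        r = [a - b for a, b in zip(convolve(x, k), y)]
        g = convolve(r, k[::-1])
        x = [a - lr * b for a, b in zip(x, g)]
        # Plug-and-play step: denoiser in place of the proximal operator.
        x = convolve(x, [0.25, 0.5, 0.25])
    return x
```

Replacing the moving average with a trained denoiser is precisely what gives PnP its edge over hand-crafted proximal operators, at the cost of the convergence guarantees the CNC formulation is designed to preserve.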
Citations: 0
On the Development of Descriptor-Based Machine Learning Models for Thermodynamic Properties: Part 2—Applicability Domain and Outliers
IF 2.3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-12-18 DOI: 10.3390/a16120573
Cindy Trinh, Silvia Lasala, O. Herbinet, Dimitrios Meimaroglou
This article investigates the applicability domain (AD) of machine learning (ML) models trained on high-dimensional data for the prediction of the ideal-gas enthalpy of formation and entropy of molecules via descriptors. The AD is crucial, as it describes the space of chemical characteristics in which the model can make predictions with a given reliability. This work studies the AD definition of an ML model throughout its development procedure: during data preprocessing, model construction and model deployment. Three AD definition methods, commonly used for outlier detection in high-dimensional problems, are compared: isolation forest (iForest), random forest prediction confidence (RF confidence) and k-nearest neighbors in the 2D projection of descriptor space obtained via t-distributed stochastic neighbor embedding (tSNE2D/kNN). These methods compute an anomaly score that can be used instead of the distance metrics of classical low-dimension AD definition methods, the latter being generally unsuitable for high-dimensional problems. Typically, in low- (high-) dimensional problems, a molecule is considered to lie within the AD if its distance from the training domain (anomaly score) is below a given threshold. During data preprocessing, the three AD definition methods are used to identify outlier molecules, and the effect of their removal is investigated. A more significant improvement of model performance is observed when outliers identified with RF confidence are removed (e.g., for a removal of 30% of outliers, the MAE (Mean Absolute Error) of the test dataset is divided by 2.5, 1.6 and 1.1 for RF confidence, iForest and tSNE2D/kNN, respectively). While these three methods identify X-outliers, the effect of other types of outliers, namely Model-outliers and y-outliers, is also investigated. In particular, the elimination of X-outliers followed by that of Model-outliers enables us to divide MAE and RMSE (Root Mean Square Error) by 2 and 3, respectively, while reducing overfitting. The elimination of y-outliers does not display a significant effect on the model performance. During model construction and deployment, the AD serves to verify the position of the test data and of different categories of molecules with respect to the training data and to associate this position with their prediction accuracy. For data that are found to be close to the training data according to RF confidence yet display high prediction errors, tSNE 2D representations are deployed to identify the possible sources of these errors (e.g., the representation of the chemical information in the training data).
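The AD membership rule described above (a molecule is inside the AD when its anomaly score falls below a threshold) can be sketched as follows; the mean k-nearest-neighbour distance used here is a simple stand-in for the iForest or RF-confidence scores, and the descriptor vectors are hypothetical.

```python
import math

def anomaly_score(x, training, k=3):
    """Mean distance to the k nearest training molecules in descriptor
    space -- a simple stand-in for iForest / RF-confidence scores."""
    d = sorted(math.dist(x, t) for t in training)
    return sum(d[:k]) / k

def in_applicability_domain(x, training, threshold, k=3):
    """A molecule lies within the AD when its anomaly score is below
    the chosen threshold."""
    return anomaly_score(x, training, k) <= threshold
```

Whatever scoring method is used, the practical decision reduces to this single comparison against a threshold, which is why the article focuses on which score best separates reliable from unreliable predictions.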
Citations: 1
Improving Clustering Accuracy of K-Means and Random Swap by an Evolutionary Technique Based on Careful Seeding
IF 2.3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-12-17 DOI: 10.3390/a16120572
L. Nigro, F. Cicirelli
K-Means is a “de facto” standard clustering algorithm due to its simplicity and efficiency. K-Means, though, strongly depends on the initialization of the centroids (the seeding method) and often gets stuck in a local sub-optimal solution. K-Means, in fact, mainly acts as a local refiner of the centroids and is unable to move centroids across the whole data space. Random Swap was defined to go beyond K-Means: its modus operandi integrates K-Means into a global strategy of centroid management, which can often generate a clustering solution close to the global optimum. This paper proposes an approach which extends both K-Means and Random Swap and improves the clustering accuracy through an evolutionary technique and careful seeding. Two new algorithms are proposed: Population-Based K-Means (PB-KM) and Population-Based Random Swap (PB-RS). Both algorithms consist of two steps: first, a population of J candidate solutions is built, and then the candidate centroids are repeatedly recombined toward a final accurate solution. The paper motivates the design of PB-KM and PB-RS, outlines their current implementation in Java based on parallel streams, and demonstrates the achievable clustering accuracy using both synthetic and real-world datasets.
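The two-step population scheme can be sketched in a few lines. The authors' implementation is in Java with parallel streams; this 1-D Python sketch only mirrors the structure (J carefully seeded candidates, pooled centroids, and a final recombination pass) and is not their code.

```python
import math, random

def careful_seed(points, k, rng):
    """k-means++-style careful seeding: each new centroid is drawn with
    probability proportional to squared distance from current seeds."""
    seeds = [rng.choice(points)]
    while len(seeds) < k:
        d2 = [min((p - s) ** 2 for s in seeds) for p in points]
        r, acc = rng.random() * sum(d2), 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                seeds.append(p)
                break
    return seeds

def lloyd(points, centroids, iters=20):
    """Plain K-Means refinement of the given centroids."""
    for _ in range(iters):
        groups = {i: [] for i in range(len(centroids))}
        for p in points:
            i = min(range(len(centroids)), key=lambda i: (p - centroids[i]) ** 2)
            groups[i].append(p)
        centroids = [sum(g) / len(g) if g else c
                     for (_, g), c in zip(groups.items(), centroids)]
    return centroids

def pb_km(points, k, J=5, seed=0):
    rng = random.Random(seed)
    # Step 1: a population of J carefully seeded candidate solutions.
    pool = [c for _ in range(J)
            for c in lloyd(points, careful_seed(points, k, rng))]
    # Step 2: recombine the pooled candidate centroids into the final k.
    return sorted(lloyd(pool, careful_seed(pool, k, rng)))
```

Pooling centroids from several independently seeded runs is what lets the population step escape the local optima that trap a single K-Means run.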
Citations: 0
Evolutionary Algorithms in a Bacterial Consortium of Synthetic Bacteria
IF 2.3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-12-17 DOI: 10.3390/a16120571
Sara Lledó Villaescusa, Rafael Lahoz-Beltra
At present, synthetic biology applications are based on programming synthetic bacteria with custom-designed genetic circuits through a top-down strategy. These genetic circuits are the programs that implement a certain algorithm, the bacterium being the agent or shell responsible for executing the program in a given environment. In this work, we study the possibility that, instead of programming synthetic bacteria through a custom-designed genetic circuit, the circuit itself emerges as the result of evolution simulated through an evolutionary algorithm. This study is conducted by performing in silico experiments in a community of synthetic bacteria in which one species or strain behaves as a pathogen against the rest of the non-pathogenic bacteria that are also part of the bacterial consortium. The goal is the eradication of the pathogenic strain through the evolutionary programming of the agents, or synthetic bacteria. The results obtained suggest the plausibility of the evolutionary design of the appropriate genetic circuit through a bottom-up strategy, and therefore the experimental feasibility of the evolutionary programming of synthetic bacteria.
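The bottom-up idea can be illustrated with a toy evolutionary algorithm: a bit-string stands in for a genetic circuit, and fitness measures how close it comes to a hypothetical target behaviour. In the paper, fitness instead reflects eradication of the pathogenic strain in the simulated consortium; the target pattern and parameters below are illustrative assumptions.

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical ideal circuit behaviour

def fitness(circuit):
    """Number of positions where the circuit matches the target."""
    return sum(c == t for c, t in zip(circuit, TARGET))

def evolve(pop_size=30, generations=200, mut_rate=0.1, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection (elitist)
        children = [[1 - b if rng.random() < mut_rate else b for b in p]
                    for p in parents]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the best individuals survive unchanged each generation, fitness never regresses, mirroring how a simulated consortium would retain its best-performing circuits while mutation explores alternatives.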
Citations: 0
Solving NP-Hard Challenges in Logistics and Transportation under General Uncertainty Scenarios Using Fuzzy Simheuristics
IF 2.3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-12-16 DOI: 10.3390/a16120570
Angel A. Juan, Markus Rabe, Majsa Ammouriova, Javier Panadero, David Peidro, Daniel Riera
In the field of logistics and transportation (L&T), this paper reviews the utilization of simheuristic algorithms to address NP-hard optimization problems under stochastic uncertainty. Then, the paper explores an extension of the simheuristics concept by introducing a fuzzy layer to tackle complex optimization problems involving both stochastic and fuzzy uncertainties. The hybrid approach combines simulation, metaheuristics, and fuzzy logic, offering a feasible methodology to solve large-scale NP-hard problems under general uncertainty scenarios. These scenarios are commonly encountered in L&T optimization challenges, such as the vehicle routing problem or the team orienteering problem, among many others. The proposed methodology allows for modeling various problem components—including travel times, service times, customers’ demands, or the duration of electric batteries—as deterministic, stochastic, or fuzzy items. A cross-problem analysis of several computational experiments is conducted to validate the effectiveness of the fuzzy simheuristic methodology. Being a flexible methodology that allows us to tackle NP-hard challenges under general uncertainty scenarios, fuzzy simheuristics can also be applied in fields other than L&T.
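The mixed handling of deterministic, stochastic, and fuzzy components can be sketched for a single route evaluation inside a simheuristic loop; the leg values, the triangular fuzzy number, and its centroid defuzzification are illustrative assumptions, not the paper's model.

```python
import random

def defuzzify_triangular(a, b, c):
    """Centroid of a triangular fuzzy number (a, b, c)."""
    return (a + b + c) / 3.0

def simulate_route_time(legs, runs=1000, seed=42):
    """Monte Carlo estimate of total route time. Each leg is
    ('det', t), ('stoch', mean, sd) or ('fuzzy', a, b, c)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(runs):
        t = 0.0
        for leg in legs:
            if leg[0] == "det":
                t += leg[1]
            elif leg[0] == "stoch":
                t += max(0.0, rng.gauss(leg[1], leg[2]))  # sampled
            else:
                t += defuzzify_triangular(*leg[1:])       # fuzzy
        totals.append(t)
    return sum(totals) / runs

# Hypothetical 3-leg route mixing the three uncertainty types.
route = [("det", 10.0), ("stoch", 20.0, 2.0), ("fuzzy", 5.0, 6.0, 10.0)]
```

In a full fuzzy simheuristic, a metaheuristic proposes candidate routes and this kind of simulation-plus-defuzzification evaluation ranks them under uncertainty.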
Citations: 0
Generator of Fuzzy Implications
IF 2.3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-12-15 DOI: 10.3390/a16120569
Athina Daniilidou, A. Konguetsof, Georgios Souliotis, Basil Papadopoulos
In this research paper, a generator of fuzzy methods based on theorems and axioms of fuzzy logic is derived, analyzed and applied. The family presented generates fuzzy implications according to the value of a selected parameter. The obtained fuzzy implications should satisfy a number of axioms, and the conditions under which the maximum number of axioms is satisfied are specified. New theorems are stated and proven based on the rule that the fuzzy function of fuzzy implication, which is strong, leads to fuzzy negation. In this work, the collected data were fuzzified for the application of the new formulae. The fuzzification of the data was undertaken using four kinds of membership degree functions. The new fuzzy functions were compared based on the results obtained after a number of repetitions. The new proposed methodology presents a new family of fuzzy implications, and also an algorithm is shown that produces fuzzy implications so as to be able to select the optimal method of the generator according to the value of a free parameter.
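The paper's specific generator is only described in the full text. As an illustration of what a parameter-driven family of fuzzy implications looks like — and of checking implication axioms numerically — the sketch below uses the classical Yager-t-conorm-based S-implication I_p(x, y) = min(1, ((1 − x)^p + y^p)^(1/p)), which reduces to the Łukasiewicz implication at p = 1. This family and the grid-based axiom check are illustrative assumptions, not the authors' construction.

```python
def yager_implication(x, y, p=2.0):
    """S-implication built from the Yager t-conorm with negation N(x) = 1 - x:
    I_p(x, y) = min(1, ((1 - x)^p + y^p)^(1/p))."""
    return min(1.0, ((1.0 - x) ** p + y ** p) ** (1.0 / p))

def satisfies_basic_axioms(impl, p, n=11):
    """Numerically check boundary conditions and monotonicity on an n-by-n grid:
    I(0,0) = I(1,1) = 1, I(1,0) = 0, antitone in x, isotone in y."""
    xs = [i / (n - 1) for i in range(n)]
    ok = impl(0.0, 0.0, p) == 1.0 and impl(1.0, 1.0, p) == 1.0 \
        and impl(1.0, 0.0, p) == 0.0
    for y in xs:                         # decreasing in the first argument
        vals = [impl(x, y, p) for x in xs]
        ok = ok and all(a >= b for a, b in zip(vals, vals[1:]))
    for x in xs:                         # increasing in the second argument
        vals = [impl(x, y, p) for y in xs]
        ok = ok and all(a <= b for a, b in zip(vals, vals[1:]))
    return ok
```

Varying the free parameter p sweeps through a continuum of implications, which is the kind of selection the abstract's algorithm performs.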
Citations: 0
Deep Learning-Based Visual Complexity Analysis of Electroencephalography Time-Frequency Images: Can It Localize the Epileptogenic Zone in the Brain?
IF 2.3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-12-15 DOI: 10.3390/a16120567
N. Makaram, Sarvagya Gupta, M. Pesce, J. Bolton, Scellig Stone, Daniel Haehn, Marc Pomplun, Christos Papadelis, Phillip L Pearl, Alexander Rotenberg, P. E. Grant, Eleonora Tamilia
In drug-resistant epilepsy, a visual inspection of intracranial electroencephalography (iEEG) signals is often needed to localize the epileptogenic zone (EZ) and guide neurosurgery. The visual assessment of iEEG time-frequency (TF) images is an alternative to signal inspection, but subtle variations may escape the human eye. Here, we propose a deep learning-based metric of visual complexity to interpret TF images extracted from iEEG data and aim to assess its ability to identify the EZ in the brain. We analyzed interictal iEEG data from 1928 contacts recorded from 20 children with drug-resistant epilepsy who became seizure-free after neurosurgery. We localized each iEEG contact in the MRI, created TF images (1–70 Hz) for each contact, and used a pre-trained VGG16 network to measure their visual complexity by extracting unsupervised activation energy (UAE) from 13 convolutional layers. We identified points of interest in the brain using the UAE values via patient- and layer-specific thresholds (based on extreme value distribution) and using a support vector machine classifier. Results show that contacts inside the seizure onset zone exhibit lower UAE than outside, with larger differences in deep layers (L10, L12, and L13: p < 0.001). Furthermore, the points of interest identified using the support vector machine localized the EZ with 7 mm accuracy. In conclusion, we presented a pre-surgical computerized tool that facilitates the EZ localization in the patient’s MRI without requiring long-term iEEG inspection.
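The exact UAE definition is in the full article; the sketch below assumes one plausible reading — the mean absolute activation of a layer's feature map — and substitutes a simple percentile cut-off for the paper's extreme-value-based, patient- and layer-specific thresholds. Function names and the percentile value are illustrative, and the feature maps would in practice come from a pre-trained VGG16.

```python
import numpy as np

def activation_energy(feature_map):
    """Mean absolute activation of one convolutional layer's output
    (one plausible reading of the paper's UAE metric)."""
    return float(np.mean(np.abs(feature_map)))

def low_energy_contacts(energies, q=10):
    """Flag contacts whose energy falls below the q-th percentile,
    standing in for the per-patient, per-layer UAE thresholds."""
    thr = np.percentile(energies, q)
    return [i for i, e in enumerate(energies) if e <= thr]
```

Since the study found lower UAE inside the seizure onset zone, flagging the lowest-energy contacts is the direction in which such a threshold would point.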
Citations: 0
Vision-Based Concrete-Crack Detection on Railway Sleepers Using Dense U-Net Model
IF 2.3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-12-15 DOI: 10.3390/a16120568
M. Khan, Seong-Hoon Kee, A. Nahid
Crack inspection in railway sleepers is crucial for ensuring rail safety and avoiding deadly accidents. Traditional methods for detecting cracks on railway sleepers are very time-consuming and lack efficiency. Therefore, nowadays, researchers are paying attention to vision-based algorithms, especially Deep Learning algorithms. In this work, we adopted the U-net for the first time for detecting cracks on a railway sleeper and proposed a modified U-net architecture named Dense U-net for segmenting the cracks. In the Dense U-net structure, we established several short connections between the encoder and decoder blocks, which enabled the architecture to obtain better pixel information flow. Thus, the model extracted the necessary information in more detail to predict the cracks. We collected images from railway sleepers, processed them in a dataset, and finally trained the model with the images. The model achieved an overall F1-score, precision, recall, and IoU of 86.5%, 88.53%, 84.63%, and 76.31%, respectively. We compared our suggested model with the original U-net, and the results demonstrate that our model performed better than the U-net in both quantitative and qualitative results. Moreover, we considered the necessity of crack severity analysis and measured a few parameters of the cracks. The engineers must know the severity of the cracks to have an idea about the most severe locations and take the necessary steps to repair the badly affected sleepers.
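The reported F1-score, precision, recall, and IoU are standard pixel-wise metrics for binary segmentation masks. A minimal NumPy sketch (not the authors' evaluation code) shows how they relate:

```python
import numpy as np

def segmentation_scores(pred, target):
    """Pixel-wise precision, recall, F1, and IoU for binary crack masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()       # crack pixels found
    fp = np.logical_and(pred, ~target).sum()      # false alarms
    fn = np.logical_and(~pred, target).sum()      # missed crack pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, f1, iou
```

Note that IoU is always the strictest of the four (76.31% versus 86.5% F1 in the abstract), since its denominator counts both error types.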
Citations: 0