Latest Publications in Algorithms

Comparing Direct Deliveries and Automated Parcel Locker Systems with Respect to Overall CO2 Emissions for the Last Mile
IF 2.3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-12-21 DOI: 10.3390/a17010004
K. Gutenschwager, Markus Rabe, Jorge Chicaiza-Vaca
Fast-growing e-commerce has a significant impact both on CEP (courier, express, and parcel) providers and public entities. While service providers give first priority to factors such as costs and reliable service, both are increasingly focused on environmental effects, in the interest of company image and the inhabitants’ health and comfort. Significant additional factors are traffic density, pollution, and noise. While in the past direct delivery with distribution trucks from regional depots to the customers might have been justified, this is no longer valid when taking the large and growing parcel volumes into account. Several options are followed in the literature, especially variants that introduce an additional break in the distribution chain, such as local mini-hubs, mobile distribution points, or Automated Parcel Lockers (APLs). The first two options imply a “very last mile” stage, e.g., by small electric vehicles or cargo bikes, whereas APLs rely on the customers to carry out the very last step. The uptake of this scheme will significantly depend on the density of the APLs and, thus, on the population density within quite small regions. The relationships between the different elements of these technologies and the potential customers are studied with respect to their impact on the above-mentioned factors. A variety of scenarios is investigated, covering different options for customer behavior. As an additional important point, reported studies on APLs only consider the section up to the APLs and the implied CO2 emissions. This, however, fully neglects the potentially very relevant pollution created by the customers when fetching their parcels from the APL. Therefore, in this paper this impact is systematically estimated via a simulation-based sensitivity analysis. It can be shown that taking this very last transport step into account in the calculation significantly changes the picture, especially within areas in outer city districts.
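The trade-off described in the abstract can be sketched numerically. The emission factors, distances, and car-usage shares below are illustrative assumptions, not values from the paper; the point is only that counting the customers' pickup trips can reverse the comparison in sparsely populated districts.

```python
# Sketch: total last-mile CO2 for direct delivery vs. an APL scheme that also
# counts the customers' pickup trips. All numbers are illustrative assumptions.

TRUCK_G_PER_KM = 250.0   # assumed emission factor for a delivery truck (g CO2/km)
CAR_G_PER_KM = 150.0     # assumed emission factor for a private car (g CO2/km)

def direct_delivery_co2(tour_km):
    """Truck drives a full tour to every customer's door."""
    return tour_km * TRUCK_G_PER_KM

def apl_delivery_co2(truck_km_to_apls, pickup_trip_km, customers, car_share):
    """Truck serves only the APLs; a share of customers drive to pick up."""
    truck = truck_km_to_apls * TRUCK_G_PER_KM
    pickups = car_share * customers * pickup_trip_km * CAR_G_PER_KM
    return truck + pickups

# 100 customers: a 60 km direct tour vs. a 15 km tour serving only the APLs.
direct = direct_delivery_co2(60.0)
apl_inner = apl_delivery_co2(15.0, 0.4, 100, car_share=0.2)  # dense inner district
apl_outer = apl_delivery_co2(15.0, 3.0, 100, car_share=0.8)  # sparse outer district
```

Under these assumed numbers the APL scheme wins clearly in the dense district (4950 g vs. 15000 g of CO2) but loses once most customers drive a few kilometres to the locker (39750 g), mirroring the abstract's conclusion for outer city districts.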
Citations: 0
A Survey on Swarm Robotics for Area Coverage Problem
IF 2.3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-12-20 DOI: 10.3390/a17010003
Dena Kadhim Muhsen, Ahmed T. Sadiq, Firas Abdulrazzaq Raheem
Solving the area coverage problem is one of the vital research directions that can benefit from swarm robotics. The greatest challenge for a swarm robotics system is to complete the task of covering an area effectively. Area coverage is essential in many domains, including exploration, surveillance, mapping, foraging, and several other applications. This paper presents a survey of swarm robotics research papers on area coverage from 2015 to 2022, regarding the algorithms and methods used, the hardware, and the applications in this domain. Different types of algorithms and hardware are analysed; based on this analysis, the characteristics and advantages of each are identified, and their suitability for different area-coverage applications and goals is determined. This study demonstrates that naturally inspired algorithms play the most significant role in swarm robotics for area coverage compared to other techniques. In addition, modern hardware has more capabilities suitable for supporting swarm robotics to cover an area, even if the environment is complex and contains static or dynamic obstacles.
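As a minimal illustration of the task this survey addresses, the sketch below lets a small swarm of simple agents random-walk a toroidal grid until every cell has been visited. The grid size, swarm size, and movement rule are arbitrary choices for illustration, not a method from the survey.

```python
# Toy area-coverage run: count the steps until a swarm of random-walking
# agents has visited every cell of a wrap-around grid.
import random

def swarm_coverage_steps(width=10, height=10, robots=5, seed=0):
    rng = random.Random(seed)
    positions = [(rng.randrange(width), rng.randrange(height)) for _ in range(robots)]
    covered = set(positions)
    steps = 0
    while len(covered) < width * height:
        steps += 1
        # Each robot moves one cell in a random direction (or stays put).
        positions = [((x + rng.choice((-1, 0, 1))) % width,
                      (y + rng.choice((-1, 0, 1))) % height)
                     for x, y in positions]
        covered.update(positions)
    return steps

steps = swarm_coverage_steps()
```

Even this naive rule terminates, but slowly; the survey's point is that bio-inspired coordination strategies cover the same area far more effectively.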
Citations: 0
Using Markov Random Field and Analytic Hierarchy Process to Account for Interdependent Criteria
IF 2.3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-12-19 DOI: 10.3390/a17010001
Jih-Jeng Huang, Chin-Yi Chen
The Analytic Hierarchy Process (AHP) has been a widely used multi-criteria decision-making (MCDM) method since the 1980s because of its simplicity and rationality. However, the conventional AHP assumes criteria independence, which is not always accurate in realistic scenarios where interdependencies between criteria exist. Several methods have been proposed to relax the postulate of independent criteria in the AHP, e.g., the Analytic Network Process (ANP). However, these methods usually need a large number of pairwise comparison matrices (PCMs), which makes them hard to apply to complicated, large-scale problems. This paper presents a groundbreaking approach to address this issue by incorporating discrete Markov Random Fields (MRFs) into the AHP framework. Our method enhances decision making by effectively and sensibly capturing interdependencies among criteria, reflecting actual weights. Moreover, we showcase a numerical example to illustrate the proposed method and compare the results with the conventional AHP and Fuzzy Cognitive Map (FCM). The findings highlight our method’s ability to influence global priority values and the ranking of alternatives when considering interdependencies between criteria. These results suggest that the introduced method provides a flexible and adaptable framework for modeling interdependencies between criteria, ultimately leading to more accurate and reliable decision-making outcomes.
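As background, the conventional AHP step that this paper extends derives priority weights from a single pairwise comparison matrix (PCM). The geometric-mean approximation below is a standard sketch of that step; the 3x3 PCM is an illustrative example, not data from the paper.

```python
# Conventional AHP priorities from one pairwise comparison matrix (PCM),
# using the row geometric-mean approximation of the principal eigenvector.
from math import prod

def ahp_weights(pcm):
    n = len(pcm)
    geo = [prod(row) ** (1.0 / n) for row in pcm]  # row geometric means
    total = sum(geo)
    return [g / total for g in geo]               # normalized priority weights

# Illustrative PCM: criterion A is 3x as important as B and 5x as important as C.
pcm = [
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 2.0],
    [1/5.0, 0.5, 1.0],
]
weights = ahp_weights(pcm)
```

The paper's contribution is to adjust such weights for interdependent criteria via a discrete MRF instead of the many PCMs that ANP would require.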
Citations: 0
Velocity Estimation Using Time-Differenced Carrier Phase and Doppler Shift with Different Grades of Devices: From Smartphones to Professional Receivers
IF 2.3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-12-19 DOI: 10.3390/a17010002
A. Angrisano, Giovanni Cappello, S. Gaglione, C. Gioia
Velocity estimation plays a key role in several applications, for instance in navigation and mobile mapping systems, and GNSS is currently a common way to achieve reliable and accurate velocity. Two approaches are mainly used to obtain velocity from GNSS measurements: Doppler observations and carrier phases differenced in time (TDCP). In a benign environment, Doppler-based velocity can be estimated accurately to within a few cm/s, while TDCP-based velocity can be estimated accurately to within a few mm/s. On the other hand, the TDCP technique is more prone to availability shortages and the presence of blunders. In this work, the two approaches are tested using three devices of different grades: a high-grade geodetic receiver, a high-sensitivity receiver, and a GNSS chip mounted on a smartphone. The measurements of the geodetic receiver are inherently cleaner, providing an accurate solution, while the remaining two receivers provide worse results. The case of smartphone GNSS chips can be particularly critical owing to the equipped antenna, which makes the measurements noisy and largely affected by blunders. The GNSS constellations are considered separately in order to assess the performance of the single systems. The analysis carried out in this research confirms the previous considerations about receiver grades and processing techniques. Additionally, the obtained results highlight the necessity of adopting a diagnostic approach to the measurements, such as RAIM-FDE, especially for low-grade receivers.
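For illustration, the two observables can be reduced to single-satellite range rates. The sketch below assumes the GPS L1 carrier frequency and one common sign convention; it is not the paper's estimator, which solves for the full 3D receiver velocity from many satellites.

```python
# Range rate (line-of-sight velocity) from the two GNSS observables compared
# in the paper, for a single GPS L1 satellite. Sign convention is illustrative.

C = 299_792_458.0      # speed of light, m/s
F_L1 = 1_575.42e6      # GPS L1 carrier frequency, Hz
LAMBDA_L1 = C / F_L1   # carrier wavelength, ~0.19 m

def range_rate_from_doppler(doppler_hz):
    """Instantaneous range rate from one Doppler measurement (m/s)."""
    return -LAMBDA_L1 * doppler_hz  # positive Doppler -> receiver closing in

def range_rate_from_tdcp(phase_cycles_t1, phase_cycles_t2, dt_s):
    """Average range rate over dt from carrier phases differenced in time (m/s)."""
    return LAMBDA_L1 * (phase_cycles_t2 - phase_cycles_t1) / dt_s
```

TDCP averages the phase change over the epoch interval, which is why it can reach mm/s precision in benign conditions but fails when the phase is unavailable or contains blunders (e.g., cycle slips).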
Citations: 0
Image Deblurring Based on Convex Non-Convex Sparse Regularization and Plug-and-Play Algorithm
IF 2.3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-12-18 DOI: 10.3390/a16120574
Yi Wang, Yating Xu, Tianjian Li, Tao Zhang, Jian Zou
Image deblurring based on sparse regularization has garnered significant attention, but certain limitations still need to be addressed. For instance, convex sparse regularization tends to exhibit biased estimation, which can adversely impact the deblurring performance, while non-convex sparse regularization poses challenges in terms of solving techniques. Furthermore, the performance of the traditional iterative algorithm also needs to be improved. In this paper, we propose an image deblurring method based on convex non-convex (CNC) sparse regularization and a plug-and-play (PnP) algorithm. The utilization of CNC sparse regularization not only mitigates estimation bias but also guarantees the overall convexity of the image deblurring model. The PnP algorithm is an advanced learning-based optimization algorithm that surpasses traditional optimization algorithms in terms of efficiency and performance by utilizing a state-of-the-art denoiser to replace the proximal operator. Numerical experiments verify the performance of our proposed algorithm in image deblurring.
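The bias issue the abstract mentions can be made concrete by comparing thresholding (proximal) operators: the soft threshold, the prox of the convex L1 penalty, shrinks every surviving coefficient, while a firm threshold of the kind associated with CNC penalties leaves large coefficients untouched. The threshold values below are illustrative; this is not the paper's deblurring solver.

```python
# Soft thresholding (prox of the L1 penalty, biased) vs. firm thresholding
# (associated with CNC-type penalties, unbiased for large coefficients).

def soft_threshold(x, t):
    """Shrinks |x| by t; large coefficients stay biased toward zero."""
    return max(abs(x) - t, 0.0) * (1.0 if x >= 0 else -1.0)

def firm_threshold(x, t, mu):
    """Zero below t, identity above mu, linear interpolation in between."""
    a = abs(x)
    if a <= t:
        return 0.0
    if a >= mu:
        return x  # large coefficients pass through unchanged
    return (mu * (a - t) / (mu - t)) * (1.0 if x >= 0 else -1.0)
```

A coefficient of 5 with threshold 1 becomes 4 under soft thresholding but stays 5 under the firm rule (with the upper knee at 3), which is the estimation-bias gap the CNC formulation closes while keeping the overall objective convex.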
Citations: 0
On the Development of Descriptor-Based Machine Learning Models for Thermodynamic Properties: Part 2—Applicability Domain and Outliers
IF 2.3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-12-18 DOI: 10.3390/a16120573
Cindy Trinh, Silvia Lasala, O. Herbinet, Dimitrios Meimaroglou
This article investigates the applicability domain (AD) of machine learning (ML) models trained on high-dimensional data, for the prediction of the ideal gas enthalpy of formation and entropy of molecules via descriptors. The AD is crucial as it describes the space of chemical characteristics in which the model can make predictions with a given reliability. This work studies the AD definition of a ML model throughout its development procedure: during data preprocessing, model construction and model deployment. Three AD definition methods, commonly used for outlier detection in high-dimensional problems, are compared: isolation forest (iForest), random forest prediction confidence (RF confidence) and k-nearest neighbors in the 2D projection of descriptor space obtained via t-distributed stochastic neighbor embedding (tSNE2D/kNN). These methods compute an anomaly score that can be used instead of the distance metrics of classical low-dimension AD definition methods, the latter being generally unsuitable for high-dimensional problems. Typically, in low- (high-) dimensional problems, a molecule is considered to lie within the AD if its distance from the training domain (anomaly score) is below a given threshold. During data preprocessing, the three AD definition methods are used to identify outlier molecules and the effect of their removal is investigated. A more significant improvement of model performance is observed when outliers identified with RF confidence are removed (e.g., for a removal of 30% of outliers, the MAE (Mean Absolute Error) of the test dataset is divided by 2.5, 1.6 and 1.1 for RF confidence, iForest and tSNE2D/kNN, respectively). While these three methods identify X-outliers, the effect of other types of outliers, namely Model-outliers and y-outliers, is also investigated. 
In particular, the elimination of X-outliers followed by that of Model-outliers enables us to divide MAE and RMSE (Root Mean Square Error) by 2 and 3, respectively, while reducing overfitting. The elimination of y-outliers does not display a significant effect on the model performance. During model construction and deployment, the AD serves to verify the position of the test data and of different categories of molecules with respect to the training data and associate this position with their prediction accuracy. For the data that are found to be close to the training data, according to RF confidence, and display high prediction errors, tSNE 2D representations are deployed to identify the possible sources of these errors (e.g., representation of the chemical information in the training data).
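A minimal distance-based applicability-domain check in the spirit of the kNN scoring can be sketched as follows. The 2D points, k, and threshold are illustrative; the paper applies kNN in a tSNE 2D projection of the descriptor space rather than on raw coordinates.

```python
# Toy applicability-domain (AD) check: a query is inside the AD when its mean
# distance to its k nearest training points is below a chosen threshold.
import math

def knn_anomaly_score(query, training, k=3):
    dists = sorted(math.dist(query, x) for x in training)
    return sum(dists[:k]) / k

def in_applicability_domain(query, training, k=3, threshold=1.0):
    return knn_anomaly_score(query, training, k) <= threshold

# Illustrative 2D "descriptor" training set.
train = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
inside = in_applicability_domain((0.5, 0.5), train)   # near the training data
outside = in_applicability_domain((5.0, 5.0), train)  # far from the training data
```

The iForest and RF-confidence alternatives studied in the paper replace this distance with an anomaly or confidence score, which scales better to high-dimensional descriptor spaces.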
Citations: 1
Improving Clustering Accuracy of K-Means and Random Swap by an Evolutionary Technique Based on Careful Seeding
IF 2.3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-12-17 DOI: 10.3390/a16120572
L. Nigro, F. Cicirelli
K-Means is a “de facto” standard clustering algorithm due to its simplicity and efficiency. K-Means, though, strongly depends on the initialization of the centroids (seeding method) and often gets stuck in a local sub-optimal solution. K-Means, in fact, mainly acts as a local refiner of the centroids, and it is unable to move centroids all over the data space. Random Swap was defined to go beyond K-Means, and its modus operandi integrates K-Means in a global strategy of centroids management, which can often generate a clustering solution close to the global optimum. This paper proposes an approach which extends both K-Means and Random Swap and improves the clustering accuracy through an evolutionary technique and careful seeding. Two new algorithms are proposed: the Population-Based K-Means (PB-KM) and the Population-Based Random Swap (PB-RS). Both algorithms consist of two steps: first, a population of J candidate solutions is built, and then the candidate centroids are repeatedly recombined toward a final accurate solution. The paper motivates the design of PB-KM and PB-RS, outlines their current implementation in Java based on parallel streams, and demonstrates the achievable clustering accuracy using both synthetic and real-world datasets.
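Careful seeding in the k-means++ style, on which population-based variants of this kind typically build their candidate solutions, can be sketched as follows: each new centroid is drawn with probability proportional to its squared distance from the nearest centroid already chosen. The 2D dataset and seed are illustrative assumptions, not the paper's benchmarks.

```python
# k-means++-style careful seeding: spread the initial centroids across the
# data space instead of picking them uniformly at random.
import random

def careful_seeding(points, k, seed=0):
    rng = random.Random(seed)
    centroids = [rng.choice(points)]
    while len(centroids) < k:
        # Squared distance of each point to its nearest chosen centroid.
        d2 = [min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centroids)
              for px, py in points]
        centroids.append(rng.choices(points, weights=d2, k=1)[0])
    return centroids

# Two well-separated groups: seeding tends to place one centroid per group.
points = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
centroids = careful_seeding(points, 2)
```

Because a point already chosen has zero weight, the sampled centroids are always distinct, which is what makes such seeds good starting material for the subsequent K-Means or Random Swap refinement.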
引用次数: 0
Evolutionary Algorithms in a Bacterial Consortium of Synthetic Bacteria
IF 2.3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-12-17 DOI: 10.3390/a16120571
Sara Lledó Villaescusa, Rafael Lahoz-Beltra
At present, synthetic biology applications are based on the programming of synthetic bacteria with custom-designed genetic circuits through the application of a top-down strategy. These genetic circuits are the programs that implement a certain algorithm, the bacterium being the agent or shell responsible for the execution of the program in a given environment. In this work, we study the possibility that instead of programming synthesized bacteria through a custom-designed genetic circuit, it is the circuit itself which emerges as a result of the evolution simulated through an evolutionary algorithm. This study is conducted by performing in silico experiments in a community composed of synthetic bacteria in which one species or strain behaves as pathogenic bacteria against the rest of the non-pathogenic bacteria that are also part of the bacterial consortium. The goal is the eradication of the pathogenic strain through the evolutionary programming of the agents or synthetic bacteria. The results obtained suggest the plausibility of the evolutionary design of the appropriate genetic circuit resulting from the application of a bottom-up strategy and therefore the experimental feasibility of the evolutionary programming of synthetic bacteria.
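The bottom-up strategy above rests on a standard evolutionary loop: mutate a population of genomes and select the fittest. A minimal sketch, with a OneMax-style fitness standing in for the paper's simulated pathogen-eradication score (the bitstring genome, fitness, and parameters are illustrative assumptions, not the authors' in silico model):

```python
import random

def evolve(fitness, genome_len=20, pop_size=30, gens=100, mut=0.05, seed=0):
    """Minimal elitist evolutionary loop over bitstring genomes."""
    rnd = random.Random(seed)
    pop = [[rnd.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                      # truncation selection
        children = [[b ^ (rnd.random() < mut) for b in p]   # bit-flip mutation
                    for p in parents]
        pop = parents + children                            # parents survive (elitism)
    return max(pop, key=fitness)

# OneMax stand-in fitness: count of "active" genes in the circuit
best = evolve(fitness=sum)
```

In the paper's setting, the fitness call would instead run a simulation of the bacterial consortium and score how well the candidate circuit suppresses the pathogenic strain.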
Citations: 0
Solving NP-Hard Challenges in Logistics and Transportation under General Uncertainty Scenarios Using Fuzzy Simheuristics
IF 2.3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-12-16 DOI: 10.3390/a16120570
Angel A. Juan, Markus Rabe, Majsa Ammouriova, Javier Panadero, David Peidro, Daniel Riera
In the field of logistics and transportation (L&T), this paper reviews the utilization of simheuristic algorithms to address NP-hard optimization problems under stochastic uncertainty. Then, the paper explores an extension of the simheuristics concept by introducing a fuzzy layer to tackle complex optimization problems involving both stochastic and fuzzy uncertainties. The hybrid approach combines simulation, metaheuristics, and fuzzy logic, offering a feasible methodology to solve large-scale NP-hard problems under general uncertainty scenarios. These scenarios are commonly encountered in L&T optimization challenges, such as the vehicle routing problem or the team orienteering problem, among many others. The proposed methodology allows for modeling various problem components—including travel times, service times, customers’ demands, or the duration of electric batteries—as deterministic, stochastic, or fuzzy items. A cross-problem analysis of several computational experiments is conducted to validate the effectiveness of the fuzzy simheuristic methodology. Being a flexible methodology that allows us to tackle NP-hard challenges under general uncertainty scenarios, fuzzy simheuristics can also be applied in fields other than L&T.
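A simheuristic couples a metaheuristic's candidate evaluation with simulation; the fuzzy layer enters by defuzzifying fuzzy items before (or during) each simulation run. A minimal sketch of such an evaluation for one route, with triangular fuzzy travel times plus lognormal stochastic noise (the route encoding, membership shape, and noise model are illustrative assumptions, not the authors' methodology):

```python
import random

def triangular_defuzzify(a, b, c):
    """Centroid defuzzification of a triangular fuzzy number (a, b, c)."""
    return (a + b + c) / 3

def simulate_route_cost(route, fuzzy_times, n_runs=200, seed=0):
    """Monte Carlo estimate of a route's travel cost: each leg has a fuzzy
    base time (defuzzified) scaled by stochastic lognormal noise, mimicking
    the simulation stage of a simheuristic evaluation."""
    rnd = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        cost = 0.0
        for leg in zip(route, route[1:]):
            a, b, c = fuzzy_times[leg]
            cost += triangular_defuzzify(a, b, c) * rnd.lognormvariate(0, 0.1)
        total += cost
    return total / n_runs
```

A metaheuristic (e.g., a swap-based local search over routes) would call this estimator on each candidate and keep the route with the lowest simulated cost, which is exactly the simulation-optimization coupling the abstract describes.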
Citations: 0
Generator of Fuzzy Implications
IF 2.3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-12-15 DOI: 10.3390/a16120569
Athina Daniilidou, A. Konguetsof, Georgios Souliotis, Basil Papadopoulos
In this research paper, a generator of fuzzy methods based on theorems and axioms of fuzzy logic is derived, analyzed and applied. The family presented generates fuzzy implications according to the value of a selected parameter. The obtained fuzzy implications should satisfy a number of axioms, and the conditions of satisfying the maximum number of axioms are denoted. New theorems are stated and proven based on the rule that the fuzzy function of fuzzy implication, which is strong, leads to fuzzy negation. In this work, the data taken were fuzzified for the application of the new formulae. The fuzzification of the data was undertaken using four kinds of membership degree functions. The new fuzzy functions were compared based on the results obtained after a number of repetitions. The new proposed methodology presents a new family of fuzzy implications, and also an algorithm is shown that produces fuzzy implications so as to be able to select the optimal method of the generator according to the value of a free parameter.
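The paper's generator family is defined by its own theorems; as a generic illustration of how a free parameter can generate fuzzy implications that satisfy the standard boundary axioms and induce a strong negation, consider the (S,N)-implications built from Sugeno's parametric negation (this family is a stand-in for exposition, not the authors' generator):

```python
def sugeno_negation(x, lam):
    """Sugeno's parametric strong (involutive) negation, defined for lam > -1."""
    return (1 - x) / (1 + lam * x)

def make_implication(lam):
    """(S,N)-implication generated from S = max and the Sugeno negation:
    I(x, y) = max(N_lam(x), y). lam = 0 yields the Kleene-Dienes implication."""
    def I(x, y):
        return max(sugeno_negation(x, lam), y)
    return I

kd = make_implication(0.0)   # Kleene-Dienes: I(x, y) = max(1 - x, y)
```

The induced negation N(x) = I(x, 0) is exactly the Sugeno negation, which is involutive (N(N(x)) = x), matching the abstract's rule that a strong fuzzy implication leads to a fuzzy negation.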
Citations: 0