
Latest Publications in IET Software

A Data-Driven Artificial Neural Network Approach to Software Project Risk Assessment
IF 1.6, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2023-12-19. DOI: 10.1049/2023/4324783
Mohammed Naif Alatawi, Saleh Alyahyan, Shariq Hussain, Abdullah Alshammari, Abdullah A. Aldaeej, Ibrahim Khalil Alali, H. Alwageed
In the realm of software project management, predicting and mitigating risks are pivotal for successful project execution. Traditional risk assessment methods have limitations in handling complex and dynamic software projects. This study presents a novel approach that leverages artificial neural networks (ANNs) to enhance risk prediction accuracy. We utilize historical project data, encompassing project complexity, financial factors, performance metrics, schedule adherence, and user-related variables, to train the ANN model. Our approach involves optimizing the ANN architecture, with various configurations tested to identify the most effective setup. We compare the performance of mean squared error (MSE) and mean absolute error (MAE) as error functions and find that MAE yields superior results. Furthermore, we demonstrate the effectiveness of our model through comprehensive risk assessment. We predict both the overall project risk and individual risk factors, providing project managers with a valuable tool for risk mitigation. Validation results confirm the robustness of our approach when applied to previously unseen data. The achieved accuracy of 97.12% (or 99.12% with uncertainty consideration) underscores the potential of ANNs in risk management. This research contributes to the software project management field by offering an innovative and highly accurate risk assessment model. It empowers project managers to make informed decisions and proactively address potential risks, ultimately enhancing project success.
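The abstract does not include an implementation, but the setup it describes (a small feed-forward network trained on historical project features, with MAE rather than MSE as the error function) can be sketched as follows. The feature set, layer sizes, and synthetic data are illustrative assumptions, not the authors' actual configuration or dataset.

```python
# Minimal sketch (assumptions: feature set, layer sizes, and synthetic data are
# illustrative; the paper's actual architecture and dataset are not reproduced here).
import torch
import torch.nn as nn

# Five hypothetical input features: complexity, financial factors, performance
# metrics, schedule adherence, and user-related variables, each scaled to [0, 1].
X = torch.rand(200, 5)                      # synthetic historical project data
y = torch.rand(200, 1)                      # overall project risk in [0, 1]

model = nn.Sequential(                      # one possible ANN configuration
    nn.Linear(5, 16), nn.ReLU(),
    nn.Linear(16, 8), nn.ReLU(),
    nn.Linear(8, 1), nn.Sigmoid(),
)
loss_fn = nn.L1Loss()                       # MAE, the error function the study favours
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(500):                    # plain full-batch training loop
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(f"final MAE on training data: {loss.item():.4f}")
```

Swapping `nn.L1Loss` for `nn.MSELoss` would reproduce the MSE-versus-MAE comparison the study reports, on this toy data.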
Citations: 0
An Observational Study on React Native (RN) Questions on Stack Overflow (SO)
IF 1.6, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2023-11-30. DOI: 10.1049/2023/6613434
Luluh Albesher, Razan Aldossari, Reem Alfayez
Mobile applications are continuously increasing in prevalence. One of the main challenges in mobile application development is creating cross-platform applications. To facilitate developing cross-platform applications, the software engineering community created several solutions, one of which is React Native (RN), which is a popular cross-platform framework. The software engineering literature demonstrated the effectiveness of Stack Overflow (SO) in providing real-world perspectives on a variety of technical subjects. Therefore, this study aims to gain a better understanding of the stance of RN on SO. We identified and analyzed 131,620 SO RN-related questions. Moreover, we observed how the interest toward RN on SO evolves over time. Additionally, we utilized Latent Dirichlet Allocation (LDA) to identify RN-related topics that are discussed within the questions. Afterward, we utilized a number of proxy measures to estimate the popularity and difficulty of these topics. The results revealed that interest toward RN on SO was generally increasing. Moreover, RN-related questions revolve around six topics, with the topics of layout and navigation being the most popular and the topic of iOS issues being the most difficult. Software engineering researchers, practitioners, educators, and RN contributors may find the results of this study beneficial in guiding their future RN efforts.
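As a rough illustration of the topic-modelling step, the sketch below fits LDA over a toy set of question titles with scikit-learn; the six-topic setting mirrors the number of topics reported, while the corpus and preprocessing are illustrative assumptions rather than the study's pipeline.

```python
# Minimal sketch of the LDA step (toy corpus; the study's 131,620 questions and
# its preprocessing pipeline are not reproduced here).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

questions = [
    "How to fix flexbox layout in React Native?",
    "React Navigation stack not rendering on Android",
    "iOS build fails after upgrading React Native",
    "How to persist state with AsyncStorage?",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(questions)

lda = LatentDirichletAllocation(n_components=6, random_state=0)  # six topics, as reported
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-3:]]
    print(f"topic {idx}: {top_terms}")
```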
Citations: 0
Analysis of Emotional Deconstruction and the Role of Emotional Value for Learners in Animation Works Based on Digital Multimedia Technology
IF 1.6, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2023-11-22. DOI: 10.1049/2023/5566781
Shilei Liang
With the rapid development of artificial intelligence and digital media technology, modern animation technology has greatly improved the creative efficiency of creators through computer-generated graphics, electronic manual painting, and other means, and its number has also experienced explosive growth. The intelligent completion of emotional expression identification within animation works holds immense significance for both animation production learners and the creation of intelligent animation works. Consequently, emotion recognition has emerged as a focal point of research attention. This paper focuses on the analysis of emotional states in animation works. First, by analyzing the characteristics of emotional expression in animation, the model data foundation for using sound and video information is determined. Subsequently, we perform individual feature extraction for these two types of information using gated recurrent unit (GRU). Finally, we employ a multiattention mechanism to fuse the multimodal information derived from audio and video sources. The experimental outcomes demonstrate that the proposed method framework attains a recognition accuracy exceeding 90% for the three distinct emotional categories. Remarkably, the recognition rate for negative emotions reaches an impressive 94.7%, significantly surpassing the performance of single-modal approaches and other feature fusion methods. This research presents invaluable insights for the training of multimedia animation production professionals, empowering them to better grasp the nuances of emotion transfer within animation and, thereby, realize productions of elevated quality, which will greatly improve the market operational efficiency of animation industry.
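The fusion pipeline described (per-modality GRU encoders followed by attention-based fusion and a three-class emotion head) might look roughly like the PyTorch sketch below; the feature dimensions, single attention layer, and classifier are assumptions rather than the paper's exact multiattention design.

```python
# Minimal sketch (assumed dimensions and a single attention-based fusion layer;
# the paper's exact multiattention mechanism is not specified in the abstract).
import torch
import torch.nn as nn

class AudioVideoEmotionNet(nn.Module):
    def __init__(self, audio_dim=40, video_dim=512, hidden=128, n_classes=3):
        super().__init__()
        self.audio_gru = nn.GRU(audio_dim, hidden, batch_first=True)
        self.video_gru = nn.GRU(video_dim, hidden, batch_first=True)
        # attention over the two per-modality summary vectors
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, audio_seq, video_seq):
        _, a = self.audio_gru(audio_seq)           # (1, B, hidden) last hidden state
        _, v = self.video_gru(video_seq)
        modalities = torch.cat([a, v], dim=0).transpose(0, 1)   # (B, 2, hidden)
        fused, _ = self.attn(modalities, modalities, modalities)
        return self.classifier(fused.mean(dim=1))  # logits for 3 emotion classes

net = AudioVideoEmotionNet()
logits = net(torch.randn(8, 50, 40), torch.randn(8, 50, 512))  # batch of 8 clips
print(logits.shape)  # torch.Size([8, 3])
```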
Citations: 0
Evaluating the Impact of Data Transformation Techniques on the Performance and Interpretability of Software Defect Prediction Models
CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2023-11-14. DOI: 10.1049/2023/6293074
Yu Zhao, Zhiqiu Huang, Lina Gong, Yi Zhu, Qiao Yu, Yuxiang Gao
The performance of software defect prediction (SDP) models determines the priority of test resource allocation. Researchers also use interpretability techniques to gain empirical knowledge about software quality from SDP models. However, SDP methods designed in the past research rarely consider the impact of data transformation methods, simple but commonly used preprocessing techniques, on the performance and interpretability of SDP models. Therefore, in this paper, we investigate the impact of three data transformation methods (Log, Minmax, and Z-score) on the performance and interpretability of SDP models. Through empirical research on (i) six classification techniques (random forest, decision tree, logistic regression, Naive Bayes, K-nearest neighbors, and multilayer perceptron), (ii) six performance evaluation indicators (Accuracy, Precision, Recall, F1, MCC, and AUC), (iii) two interpretable methods (permutation and SHAP), (iv) two feature importance measures (Top-k feature rank overlap and difference), and (v) three datasets (Promise, Relink, and AEEEM), our results show that the data transformation methods can significantly improve the performance of the SDP models and greatly affect the variation of the most important features. Specifically, the impact of data transformation methods on the performance and interpretability of SDP models depends on the classification techniques and evaluation indicators. We observe that log transformation improves NB model performance by 7%–61% on the other five indicators with a 5% drop in Precision. Minmax and Z-score transformation improves NB model performance by 2%–9% across all indicators. However, all three transformation methods lead to substantial changes in the Top-5 important feature ranks, with differences exceeding 2 in 40%–80% of cases (detailed results available in the main content). Based on our findings, we recommend that (1) considering the impact of data transformation methods on model performance and interpretability when designing SDP approaches as transformations can improve model accuracy, and potentially obscure important features, which lead to challenges in interpretation, (2) conducting comparative experiments with and without the transformations to validate the effectiveness of proposed methods which are designed to improve the prediction performance, and (3) tracking changes in the most important features before and after applying data transformation methods to ensure precise and traceable interpretability conclusions to gain insights. Our study reminds researchers and practitioners of the need for comprehensive considerations even when using other similar simple data processing methods.
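For reference, the three transformations under study are standard preprocessing steps; a minimal sketch of applying them to a small metric matrix is shown below, with synthetic values standing in for real defect datasets such as Promise.

```python
# Minimal sketch of the three data transformations examined in the study
# (synthetic metric values; the real SDP datasets are not loaded here).
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[10.0, 200.0], [3.0, 50.0], [75.0, 900.0]])  # e.g. two code metrics per module

X_log = np.log1p(X)                           # Log transform (log(1 + x) avoids log(0))
X_minmax = MinMaxScaler().fit_transform(X)    # Minmax: rescale each feature to [0, 1]
X_zscore = StandardScaler().fit_transform(X)  # Z-score: zero mean, unit variance per feature

print(X_log.round(2))
print(X_minmax.round(2))
print(X_zscore.round(2))
```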
Citations: 0
Beam Transmission (BTR) Software for Efficient Neutral Beam Injector Design and Tokamak Operation
CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2023-10-24. DOI: 10.3390/software2040022
Eugenia Dlougach, Margarita Kichik
BTR code (originally—“Beam Transmission and Re-ionization”, 1995) is used for Neutral Beam Injection (NBI) design; it is also applied to the injector system of ITER. In 2008, the BTR model was extended to include the beam interaction with plasmas and direct beam losses in tokamak. For many years, BTR has been widely used for various NBI designs for efficient heating and current drive in nuclear fusion devices for plasma scenario control and diagnostics. BTR analysis is especially important for ‘beam-driven’ fusion devices, such as fusion neutron source (FNS) tokamaks, since their operation depends on a high NBI input in non-inductive current drive and fusion yield. BTR calculates detailed power deposition maps and particle losses with an account of ionized beam fractions and background electromagnetic fields; these results are used for the overall NBI performance analysis. BTR code is open for public usage; it is fully interactive and supplied with an intuitive graphical user interface (GUI). The input configuration is flexibly adapted to any specific NBI geometry. High running speed and full control over the running options allow the user to perform multiple parametric runs on the fly. The paper describes the detailed physics of BTR, numerical methods, graphical user interface, and examples of BTR application. The code is still in evolution; basic support is available to all BTR users.
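As a purely illustrative aside, the kind of transmission calculation the abstract mentions can be reduced to a toy attenuation model: a neutral beam crossing background gas is re-ionized at a rate set by the gas density and cross-section. The sketch below assumes a constant density and a single cross-section, and it is in no way a substitute for BTR's multi-species, field-dependent model; all numbers are made up.

```python
# Toy illustration of neutral-beam re-ionization loss along a duct (constant gas
# density, single cross-section). BTR's actual model is far richer; the values
# below are purely illustrative assumptions.
import numpy as np

sigma = 1.0e-20                      # assumed re-ionization cross-section, m^2
density = 5.0e17                     # assumed background gas density, m^-3
path = np.linspace(0.0, 10.0, 6)     # distance along the beamline, m

neutral_fraction = np.exp(-sigma * density * path)   # surviving neutral fraction
for z, f in zip(path, neutral_fraction):
    print(f"z = {z:4.1f} m   neutral fraction = {f:.3f}")
```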
Citations: 0
Detecting Floating-Point Expression Errors Based Improved PSO Algorithm
CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2023-10-23. DOI: 10.1049/2023/6681267
Hongru Yang, Jinchen Xu, Jiangwei Hao, Zuoyan Zhang, Bei Zhou
The use of floating-point numbers inevitably leads to inaccurate results and, in certain cases, significant program failures. Detecting floating-point errors is critical to ensuring that the outputs of floating-point programs are correct. However, due to the sparsity of floating-point errors, only a limited number of inputs can cause significant floating-point errors, and determining how to detect these inputs and selecting the appropriate search technique is critical to detecting significant errors. This paper proposes a characteristic particle swarm optimization (CPSO) algorithm based on the particle swarm optimization (PSO) algorithm. The floating-point expression error detection tool PSOED is implemented, which can detect significant errors in floating-point arithmetic expressions and provide the corresponding inputs. The method presented in this paper is based on two insights: (1) treating floating-point error detection as a search problem and selecting reliable heuristic search strategies to solve it; (2) fully utilizing the error distribution laws of expressions and the distribution characteristics of floating-point numbers to guide search-space generation and improve search efficiency. This paper selects 28 expressions from the FPBench standard set as test cases, uses PSOED to detect the maximum error of the expressions, and compares the results to the current dynamic error detection tools S3FP and Herbie. PSOED finds a larger maximum error than S3FP on 100% of the expressions, a larger maximum error than Herbie on 68%, and an equivalent maximum error to Herbie on 14%. The results of the experiments indicate that PSOED can detect significant floating-point expression errors.
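The search-based formulation, maximizing the discrepancy between a low-precision and a high-precision evaluation of the same expression, can be illustrated with a plain PSO loop. The expression, bounds, and hyperparameters below are assumptions, and the sketch is the standard PSO baseline rather than the proposed CPSO variant or the PSOED tool itself.

```python
# Minimal sketch: plain PSO searching for an input that maximizes the error of a
# float32 evaluation of f(x) = (1 - cos(x)) / x**2 against a float64 reference.
# Hyperparameters, bounds, and the expression are illustrative assumptions.
import numpy as np

def f(x, dtype):
    x = dtype(x)
    return (dtype(1.0) - np.cos(x)) / (x * x)   # cancellation-prone near x = 0

def relative_error(x):
    lo, hi = f(x, np.float32), f(x, np.float64)
    return abs(float(lo) - float(hi)) / max(abs(float(hi)), 1e-300)

rng = np.random.default_rng(0)
n, lo_b, hi_b = 30, 1e-6, 1.0
pos = rng.uniform(lo_b, hi_b, n)
vel = np.zeros(n)
pbest = pos.copy()
pbest_val = np.array([relative_error(x) for x in pos])
gbest = pbest[pbest_val.argmax()]

for _ in range(100):                            # standard PSO velocity/position update
    r1, r2 = rng.random(n), rng.random(n)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo_b, hi_b)
    val = np.array([relative_error(x) for x in pos])
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmax()]

print(f"worst input found: x = {gbest:.3e}, relative error = {relative_error(gbest):.2e}")
```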
Citations: 0
A Systematic Mapping of the Proposition of Benchmarks in the Software Testing and Debugging Domain
CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2023-10-12. DOI: 10.3390/software2040021
Deuslirio da Silva-Junior, Valdemar V. Graciano-Neto, Diogo M. de-Freitas, Plino de Sá Leitão-Junior, Mohamad Kassab
Software testing and debugging are standard practices of software quality assurance since they enable the identification and correction of failures. Benchmarks have been used in that context as a group of programs to support the comparison of different techniques according to pre-established parameters. However, the reasons that inspire researchers to propose novel benchmarks are not fully understood. This article reports the investigation, identification, classification, and externalization of the state of the art about the proposition of benchmarks on software testing and debugging domains. The study was carried out using systematic mapping procedures according to the guidelines widely followed by software engineering literature. The search identified 1674 studies, from which, 25 were selected for analysis. A list of benchmarks is provided and descriptively mapped according to their characteristics, motivations, and scope of use for their creation. The lack of data to support the comparison between available and novel software testing and debugging techniques is the main motivation for the proposition of benchmarks. Advancements in the standardization and prescription of benchmark structure and composition are still required. Establishing such a standard could foster benchmark reuse, thereby saving time and effort in the engineering of benchmarks for software testing and debugging.
Citations: 0
A Differential Datalog Interpreter
CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2023-09-21. DOI: 10.3390/software2030020
Matthew James Stephenson
The core reasoning task for datalog engines is materialization, the evaluation of a datalog program over a database alongside its physical incorporation into the database itself. The de facto method of computing it is the recursive application of inference rules. Because this is a costly operation, it is a must for datalog engines to provide incremental materialization; that is, to adjust the computation to new data instead of restarting from scratch. One of the major caveats is that deleting data is notoriously more involved than adding it, since one has to take into account all possible data that has been entailed from what is being deleted. Differential dataflow is a computational model that provides efficient incremental maintenance, notably with equal performance between additions and deletions, and work distribution of iterative dataflows. In this paper, we investigate the performance of materialization with three reference datalog implementations, out of which one is built on top of a lightweight relational engine, and the two others are differential-dataflow and non-differential versions of the same rewrite algorithm with the same optimizations. Experimental results suggest that monotonic aggregation is more powerful than merely ascending the powerset lattice.
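The recursive rule application the abstract refers to can be illustrated with a tiny semi-naive materialization of a transitive-closure program; this sketch is the plain, non-incremental baseline only, not the differential-dataflow machinery the paper evaluates.

```python
# Minimal sketch: semi-naive materialization of
#   reach(x, y) :- edge(x, y).
#   reach(x, z) :- reach(x, y), edge(y, z).
# This is the plain recursive-application baseline, not a differential engine.
edge = {("a", "b"), ("b", "c"), ("c", "d")}

reach = set(edge)          # first rule: every edge is reachable
delta = set(edge)          # facts newly derived in the previous iteration
while delta:
    new = {(x, z) for (x, y) in delta for (y2, z) in edge if y == y2} - reach
    reach |= new           # physically add derived facts to the "database"
    delta = new            # only join the newest facts in the next round

print(sorted(reach))       # six facts in total, including ("a", "d")
```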
Citations: 0
User Authorization in Microservice-Based Applications
CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2023-09-19. DOI: 10.3390/software2030019
Niklas Sänger, Sebastian Abeck
Microservices have emerged as a prevalent architectural style in modern software development, replacing traditional monolithic architectures. The decomposition of business functionality into distributed microservices offers numerous benefits, but introduces increased complexity to the overall application. Consequently, the complexity of authorization in microservice-based applications necessitates a comprehensive approach that integrates authorization as an inherent component from the beginning. This paper presents a systematic approach for achieving fine-grained user authorization using Attribute-Based Access Control (ABAC). The proposed approach emphasizes structure preservation, facilitating traceability throughout the various phases of application development. As a result, authorization artifacts can be traced seamlessly from the initial analysis phase to the subsequent implementation phase. One significant contribution is the development of a language to formulate natural language authorization requirements and policies. These natural language authorization policies can subsequently be implemented using the policy language Rego. By leveraging the analysis of software artifacts, the proposed approach enables the creation of comprehensive and tailored authorization policies.
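The paper expresses its policies in Rego; as a language-neutral illustration of the ABAC idea itself (access granted when subject, action, and resource attributes satisfy a policy predicate), a minimal Python sketch is given below. The attribute names and the rule are hypothetical, not taken from the paper.

```python
# Minimal ABAC sketch: a policy is a predicate over subject, action, and resource
# attributes. Attribute names and the rule itself are illustrative; the paper
# expresses such policies in Rego rather than Python.
def order_policy(subject: dict, action: str, resource: dict) -> bool:
    # "A customer may read only their own orders; support staff may read any order."
    if action != "read" or resource.get("type") != "order":
        return False
    if subject.get("role") == "support":
        return True
    return subject.get("user_id") == resource.get("owner_id")

alice = {"user_id": "u1", "role": "customer"}
print(order_policy(alice, "read", {"type": "order", "owner_id": "u1"}))  # True
print(order_policy(alice, "read", {"type": "order", "owner_id": "u2"}))  # False
```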
Citations: 0
A Quantitative Review of the Research on Business Process Management in Digital Transformation: A Bibliometric Approach
IF 1.6, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING. Pub Date: 2023-09-01. DOI: 10.3390/software2030018
Bui Quang Truong, Anh Nguyen-Duc, Nguyen Thi Cam Van
In recent years, research on digital transformation (DT) and business process management (BPM) has gained significant attention in the field of business and management. This paper aims to conduct a comprehensive bibliometric analysis of global research on DT and BPM from 2007 to 2022. A total of 326 papers were selected from Web of Science and Scopus for analysis. Using bibliometric methods, we evaluated the current state and future research trends of DT and BPM. Our analysis reveals that the number of publications on DT and BPM has grown significantly over time, with the Business Process Management Journal being the most active. The countries that have contributed the most to this field are Germany (with four universities in the top 10) and the USA. The Business Process Management Journal is the most active in publishing research on digital transformation and business process management. The analysis showed that “artificial intelligence” is a technology that has been studied extensively and is increasingly asserted to influence companies’ business processes. Additionally, the study provides valuable insights from the co-citation network analysis. Based on our findings, we provide recommendations for future research directions on DT and BPM. This study contributes to a better understanding of the current state of research on DT and BPM and provides insights for future research.
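Co-citation analysis of the kind reported can be prototyped by counting how often pairs of references appear together in papers' bibliographies; the sketch below uses generic reference keys and a toy corpus purely for illustration, not the study's 326-paper dataset.

```python
# Minimal sketch of building a co-citation network: two references are linked when
# they are cited together by the same paper. Reference keys are placeholders.
from itertools import combinations
from collections import Counter

bibliographies = [           # one list of cited references per analyzed paper
    ["ref_A", "ref_B", "ref_C"],
    ["ref_B", "ref_C"],
    ["ref_A", "ref_C"],
]

cocitations = Counter()
for refs in bibliographies:
    for pair in combinations(sorted(set(refs)), 2):
        cocitations[pair] += 1   # edge weight = number of co-citing papers

for (a, b), weight in cocitations.most_common():
    print(f"{a} -- {b}: co-cited {weight} times")
```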
Citations: 1