
Latest publications: International Journal of System Assurance Engineering and Management

Interpretive structural modeling of lean six sigma critical success factors in perspective of industry 4.0 for Indian manufacturing industries
IF 2 Q2 Engineering Pub Date: 2024-05-30 DOI: 10.1007/s13198-024-02375-y
Pramod Kumar, Jaiprakash Bhamu, Sunkulp Goel, Dharmendra Singh

This paper aims to identify and analyze critical success factors (CSFs) of Lean Six Sigma (LSS) implementation in the context of Industry 4.0 (I4.0) in Indian manufacturing industries. Twenty CSFs were identified from the literature and experts' opinions. A survey was conducted by administering a designed questionnaire in Indian manufacturing industries, and the reliability of the factors was tested by calculating the Cronbach's alpha (α) value for all responses. Of the twenty CSFs, sixteen were found reliable. These sixteen factors were then analyzed using the Interpretive Structural Modeling (ISM) technique and leveled as per the developed model. MICMAC analysis was employed to determine the driving and dependence power of the CSFs. The developed model provides a platform for practitioners and researchers to design a framework for successful implementation of LSS in view of the current I4.0 manufacturing paradigm. On analyzing the data using the ISM technique, 'Organizational culture and belief', 'Effective top management commitment and attitude' and 'Motivated and skilled manpower' were observed to be the most significant CSFs, driving the path for proper implementation of LSS in Indian manufacturing industries. The developed model will enable practitioners to draw up an effective strategy for proper implementation of LSS in view of Industry 4.0, and the results will give management an edge in thinking strategically about improvements in this competitive environment.
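The MICMAC step described above reduces to simple matrix arithmetic: given a final reachability matrix, a factor's driving power is its row sum and its dependence power is its column sum. A minimal sketch with a hypothetical 4-factor matrix (not the paper's actual 16-factor data):

```python
# MICMAC analysis sketch: driving power = row sum of the final reachability
# matrix, dependence power = column sum. The 4x4 matrix is hypothetical.
reachability = [
    [1, 1, 1, 1],  # F1 reaches every factor -> strong driver
    [0, 1, 0, 1],
    [0, 1, 1, 1],
    [0, 0, 0, 1],  # F4 is reached by all -> strongly dependent
]

def micmac(matrix):
    n = len(matrix)
    driving = [sum(row) for row in matrix]
    dependence = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    return driving, dependence

driving, dependence = micmac(reachability)
# Factors with high driving and low dependence (like F1 here) sit at the
# bottom of the ISM hierarchy and drive the rest, which is how the paper
# characterizes 'Organizational culture and belief'.
```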

Citations: 0
An open-source MP + CNN + BiLSTM model-based hybrid model for recognizing sign language on smartphones
IF 2 Q2 Engineering Pub Date: 2024-05-30 DOI: 10.1007/s13198-024-02376-x
Hayder M. A. Ghanimi, Sudhakar Sengan, Vijaya Bhaskar Sadu, Parvinder Kaur, Manju Kaushik, Roobaea Alroobaea, Abdullah M. Baqasah, Majed Alsafyani, Pankaj Dadheech

The communication barriers experienced by deaf and hard-of-hearing individuals often lead to social isolation and limited access to essential services, underlining a critical need for effective and accessible solutions. This community faces unique challenges, such as the scarcity of sign language interpreters, particularly in remote areas, and the lack of real-time translation tools. This paper therefore proposes a smartphone-runnable sign language recognition model to address the communication problems faced by deaf and hard-of-hearing persons. The proposed model combines MediaPipe hand tracking with particle filtering (PF) to accurately detect and track hand movements, and a gesture recognition model based on a convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM) to model the temporal dynamics of sign language gestures. These models use a small number of layers and filters, depthwise separable convolutions, and dropout layers to minimize computational cost and prevent overfitting, making them suitable for smartphone deployment. The article discusses the existing challenges faced by the deaf and hard-of-hearing community and explains how the proposed model could help overcome them. A MediaPipe + PF model performs data preprocessing and feature extraction from the images. During training, with fewer activation functions and parameters, the proposed model outperformed the other CNN-with-RNN variant models (CNN + LSTM, CNN + GRU) used in the experiments in convergence speed and learning efficiency.
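The parameter savings from depthwise separable convolutions, which make such a model smartphone-friendly, can be illustrated with a quick count (generic formula; the paper's exact layer sizes are not given here):

```python
# Parameter counts for a standard vs. a depthwise separable 2D convolution.
# k: kernel size, c_in/c_out: channel counts (biases ignored for brevity).
# Generic illustration, not the paper's actual architecture.
def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    depthwise = k * k * c_in   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1x1 convolution to mix channels
    return depthwise + pointwise

std = standard_conv_params(3, 64, 128)        # 73,728 parameters
sep = depthwise_separable_params(3, 64, 128)  # 8,768 parameters
```

For a 3×3 layer with 64 input and 128 output channels, the separable variant needs roughly 8× fewer parameters, which is where most of the on-device savings come from.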

Citations: 0
Model to reduce DevOps pipeline execution time using SAST
IF 2 Q2 Engineering Pub Date: 2024-05-30 DOI: 10.1007/s13198-024-02262-6
Shobhit Kumar Saurabh, Deepak Kumar

Static application security testing (SAST) is a well-known static code analysis approach for identifying security flaws in code and thereby improving software product quality. SonarQube is a SAST tool that can scan an application's source code, identify the vulnerabilities present in the software, and find the root cause of those vulnerabilities; it thus helps remediate the security flaws found when analyzing software products. SAST tools analyze an application from the source code inward and do not need the system to be running to perform the analysis. The scan gives developers instant feedback on reducing security risks in an application. It helps resolve issues present during development and helps developers increase their knowledge, so they become competent in security for the software product. The Sonar analysis report provides on-demand access to all recommendations; users can navigate to the lines of code that contain vulnerabilities, enabling faster discovery and auditing. Developers can therefore write code that is less vulnerable, delivering a more secure, higher-quality product. To conduct static analysis, the authors used SonarQube, which compiles and measures code quality for code kept in repositories. The authors observed that SAST is an important step in conducting security and vulnerability scans for a software product, and that most organisations conduct SAST at a later stage of the DevOps/DevSecOps pipeline, which increases pipeline execution time. This motivated the authors to propose a better model to reduce build pipeline execution time. Under DevOps/DevSecOps practice, SonarQube is used to perform SAST in DevSecOps pipelines, which normally increases build pipeline execution time; this increases the effort and time to complete the build pipeline and hence also impacts the overall budget of the software product. In the proposed solution, the authors reduce build pipeline execution time by conducting static analysis early in the DevSecOps phases using shift-left. The proposed solution uses a GitHub open-source project written in C#.NET, Azure DevOps, the dotnet sonar scanner tool, and SonarQube to conduct static analysis and testing. The authors aim to enhance software quality in the early DevOps phases, which helps reduce build time and cost. The proposed model will help increase the reliability, efficiency, and performance of the software product.
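The shift-left argument can be made concrete with a toy timing model: when the scan runs last, a SAST failure only surfaces after the build and tests have already consumed their time. Stage names and durations below are hypothetical, not measurements from the paper:

```python
# Illustrative pipeline timing (minutes). With SAST run late, its verdict
# arrives only after build and tests; shifting it left fails fast.
STAGES_LATE = [("build", 6), ("unit_tests", 10), ("sast_scan", 4)]
STAGES_SHIFT_LEFT = [("sast_scan", 4), ("build", 6), ("unit_tests", 10)]

def time_to_sast_feedback(stages):
    """Minutes elapsed until the SAST verdict is known."""
    elapsed = 0
    for name, minutes in stages:
        elapsed += minutes
        if name == "sast_scan":
            return elapsed
    raise ValueError("pipeline has no SAST stage")

late = time_to_sast_feedback(STAGES_LATE)         # 20 minutes
early = time_to_sast_feedback(STAGES_SHIFT_LEFT)  # 4 minutes
```

On a failed scan, the shift-left ordering also skips the downstream build and test stages entirely, which is where the pipeline-time reduction the paper targets comes from.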

Citations: 0
Adaptive joint source coding LDPC for energy efficient communication in wireless network on chip
IF 2 Q2 Engineering Pub Date: 2024-05-29 DOI: 10.1007/s13198-024-02370-3
Anupama Sindgi, U. B. Mahadevaswamy

Wireless Network-on-Chip (WiNoC) technology has emerged as a promising approach to overcoming the growing communication constraints in multi-core systems. Nevertheless, the steadily rising energy consumption of WiNoCs presents a significant obstacle. In this article, we present a novel method for addressing this issue by combining adaptive joint source coding with low-density parity-check (LDPC) encoding. Two key modifications are involved in implementing our method: first, accurate tuning of the transform-coding threshold in compressive sensing to achieve effective data compression, and second, intelligent control of the number of parity checks in LDPC coding to reduce both energy consumption and latency. These adaptive techniques are tailored to the signal-to-noise-ratio estimates and the dependability standards unique to the application. Our findings demonstrate a substantial accomplishment: a remarkable 4.2% reduction in power consumption compared to other methods currently in use. This achievement highlights the vast potential for significant energy savings in real-world applications and is a pioneering contribution to the development of energy-efficient communication systems.
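The second modification, adapting the parity-check count to the link quality, can be sketched as a simple policy: a cleaner channel gets a lighter code (less energy and latency), a noisier one gets full protection. Thresholds and check counts below are invented for illustration, not the paper's values:

```python
# Hypothetical adaptive LDPC policy: choose the number of parity checks
# from the estimated SNR and the application's reliability target.
def parity_checks_for_snr(snr_db, reliability_target=1e-6):
    # Stricter reliability targets start from a larger parity-check budget.
    base = 128 if reliability_target <= 1e-6 else 64
    if snr_db >= 15:   # clean channel: lightest protection, least energy
        return base // 4
    if snr_db >= 8:    # moderate channel
        return base // 2
    return base        # noisy channel: full protection
```

The same shape of rule (a lookup keyed on SNR estimate and dependability requirement) is what the abstract describes as "intelligent control of the number of parity checks".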

Citations: 0
Reliability assessment of emergency safety barriers based on an intuitionistic fuzzy sets aggregation procedure and subjective safety analysis: a case study
IF 2 Q2 Engineering Pub Date: 2024-05-29 DOI: 10.1007/s13198-024-02365-0
Samia Daas, Fares Innal

The emergency safety barrier is one of the active technical barriers related to the safety of liquefied petroleum gas (LPG) storage tanks. This study assesses the reliability of emergency safety barriers to help decision-makers understand how such barriers can support decisions that reduce the risks associated with LPG storage. The paper develops an integrated approach that uses an intuitionistic fuzzy sets aggregation procedure, subjective safety analysis, and emergency event tree analysis to handle uncertainty in the reliability assessment of emergency safety barriers. In addition, a case study on the reliability assessment of the emergency safety barriers of an LPG plant in Algeria, based on the proposed methodology, is carried out to illustrate its effectiveness and feasibility. The results demonstrate the ability of the intuitionistic fuzzy sets aggregation procedure and subjective safety analysis to provide highly reliable results and to evaluate the reliability of emergency safety barriers. Classical event tree analysis, by contrast, does not consider the emergency consequences of different accident scenarios; it only allows estimation of the occurrence probability of accident scenarios. The results of this study show that the reliability of emergency safety barriers can be used to estimate the probability of emergency consequences under different accident scenarios, improve reliability, and help prioritize emergency improvement measures. The study provides scientific and operational references for analyzing the emergency consequences of various accident scenarios.
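An intuitionistic fuzzy aggregation of expert judgments, as used in approaches like this one, is typically done with the standard intuitionistic fuzzy weighted averaging (IFWA) operator: each judgment is a (membership μ, non-membership ν) pair, combined with expert weights. A minimal sketch on made-up expert data, not the case study's:

```python
# Standard IFWA operator: aggregate (mu, nu) pairs with weights summing to 1.
#   mu_agg = 1 - prod((1 - mu_i) ** w_i)
#   nu_agg = prod(nu_i ** w_i)
def ifwa(judgments, weights):
    mu_term, nu_term = 1.0, 1.0
    for (mu, nu), w in zip(judgments, weights):
        mu_term *= (1.0 - mu) ** w
        nu_term *= nu ** w
    return 1.0 - mu_term, nu_term

# Hypothetical judgments from three experts on one barrier's reliability.
experts = [(0.8, 0.1), (0.6, 0.3), (0.7, 0.2)]
weights = [0.5, 0.3, 0.2]
mu, nu = ifwa(experts, weights)
# The result is again an intuitionistic fuzzy value: mu + nu <= 1.
```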

Citations: 0
Load balanced and optimal clustering in WSNs using grey wolf optimizer
IF 2 Q2 Engineering Pub Date: 2024-05-29 DOI: 10.1007/s13198-024-02306-x
Lekhraj, Alok Kumar, Anoj Kumar

A wireless sensor network (WSN) is an outstanding technology that can aid various applications. The sensor nodes used in a WSN run on batteries, which are impractical to recharge or replace, so power is the most valuable resource in wireless sensor networks. Over the years, several strategies have been invented and used to preserve this precious resource, and clustering has turned out to be one of the most successful. The aim of this paper is to suggest an effective technique for choosing cluster heads in WSNs to increase the lifetime of the network. To accomplish this task, the Grey Wolf Optimizer (GWO) technique is used; the general GWO is adapted in this paper to the particular purpose of cluster-head selection in WSNs. We consider eleven attributes in the fitness function of the proposed algorithm. Simulations are carried out under different conditions, and the results show that the proposed protocol (the CH-GWO protocol) is superior in terms of energy consumption and network lifetime when evaluated against several well-established clustering protocols. The suggested protocol forms energy-efficient and scalable clusters.
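The GWO position-update mechanics the paper builds on can be sketched in a few lines. A toy continuous objective (the sphere function) stands in for the paper's eleven-attribute cluster-head fitness; all parameters here are illustrative:

```python
# Minimal Grey Wolf Optimizer sketch. Wolves move toward the three best
# solutions (alpha, beta, delta) with a step that shrinks over iterations.
import random

def gwo(fitness, dim=2, wolves=10, iters=200, lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    pack = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    best_pos = min(pack, key=fitness)[:]
    best_fit = fitness(best_pos)
    for t in range(iters):
        pack.sort(key=fitness)
        alpha, beta, delta = (p[:] for p in pack[:3])  # copy the leaders
        a = 2 - 2 * t / iters        # exploration coefficient decays to 0
        for w in pack:
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A, C = 2 * a * r1 - a, 2 * r2
                    x += leader[d] - A * abs(C * leader[d] - w[d])
                w[d] = min(hi, max(lo, x / 3))  # average pull toward leaders
            f = fitness(w)
            if f < best_fit:         # keep the best wolf ever seen
                best_fit, best_pos = f, w[:]
    return best_pos, best_fit

best_pos, best_fit = gwo(lambda p: sum(x * x for x in p))
```

In a cluster-head variant like the paper's, the continuous positions would be decoded into candidate cluster-head sets and the fitness would score the eleven attributes instead of the sphere function.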

Citations: 0
A review on the applications of Bayesian network in web service
IF 2 Q2 Engineering Pub Date: 2024-05-27 DOI: 10.1007/s13198-024-02367-y
Kouami A. Guinhouya

Web services (WS) are the preferred approach to realizing the service-oriented computing paradigm. However, this comes with challenges, such as complexity and uncertainty, that hinder their practical application. Bayesian networks (BNs) are one of the techniques used to address these challenges. The objective of this mapping study was to determine what is known about the use of Bayesian networks in web services research. To do this, we rigorously identified and selected 69 articles (out of the 532 identified) published on the subject in 2001–2021. We then classified and analyzed these articles by Web service themes (Service Composition, Service Management, Service Engineering), Objectives (Description, Prediction, Prescription), Types of BN (Basic, Combined, Extended), and Evaluation methods (Proof of concept, Experiment, No evaluation). In doing so, we hope to provide a clear understanding of the subject. We also identify and suggest avenues for future research. The review results can thus help researchers and practitioners interested in the application of BNs in WS research.
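To show the kind of reasoning BNs bring to web services, here is a tiny network, with structure and probabilities invented for illustration: a composite service whose failure depends on two component services, queried by exact enumeration:

```python
# Toy Bayesian network: ServiceA -> Composite <- ServiceB (hypothetical CPTs).
# We compute the marginal probability that the composite service fails.
P_A = 0.1                 # P(ServiceA fails)
P_B = 0.2                 # P(ServiceB fails)
P_C = {                   # P(Composite fails | A fails?, B fails?)
    (True, True): 0.99, (True, False): 0.7,
    (False, True): 0.6, (False, False): 0.01,
}

def p_composite_fails():
    # Sum out the two parent variables (inference by enumeration).
    total = 0.0
    for a in (True, False):
        for b in (True, False):
            pa = P_A if a else 1 - P_A
            pb = P_B if b else 1 - P_B
            total += pa * pb * P_C[(a, b)]
    return total
```

Such a model is the "Description/Prediction" use the review classifies: given observed component behavior, the same network can be conditioned to predict or diagnose composite-service failures.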

Citations: 0
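As a concrete illustration of the "Prediction" objective that the survey classifies, a discrete Bayesian network over web-service variables can be queried by exact enumeration of the joint distribution. The structure (HighLoad -> SlowResponse -> SLAViolation) and all probabilities below are hypothetical, invented for illustration rather than drawn from any surveyed paper; a minimal sketch in plain Python:

```python
# Minimal discrete Bayesian network: HighLoad -> SlowResponse -> SLAViolation.
# Exact inference by enumerating the joint distribution.
# All probabilities are illustrative, not taken from the surveyed papers.

P_load = {True: 0.2, False: 0.8}                  # P(HighLoad)
P_slow = {True: {True: 0.7, False: 0.3},          # P(SlowResponse | HighLoad)
          False: {True: 0.1, False: 0.9}}
P_viol = {True: {True: 0.6, False: 0.4},          # P(SLAViolation | SlowResponse)
          False: {True: 0.05, False: 0.95}}

def joint(load, slow, viol):
    # Chain-rule factorization implied by the network structure.
    return P_load[load] * P_slow[load][slow] * P_viol[slow][viol]

def posterior_viol_given_load(load=True):
    # P(SLAViolation=True | HighLoad=load), marginalizing out SlowResponse.
    num = sum(joint(load, s, True) for s in (True, False))
    den = sum(joint(load, s, v) for s in (True, False) for v in (True, False))
    return num / den

print(round(posterior_viol_given_load(True), 4))  # prints 0.435
```

With these toy numbers, conditioning on high load raises the violation probability from 0.105 to 0.435, the kind of what-if query that makes BNs attractive for service management under uncertainty.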
An optimized dual attention-based network for brain tumor classification
IF 2 Q2 Engineering Pub Date : 2024-05-26 DOI: 10.1007/s13198-024-02300-3
Babak Masoudi

Brain tumors are among the leading causes of death worldwide. Because several distinct tumor types exist, the choice of treatment depends directly on the tumor type, which makes brain tumor classification a clinically important yet complex and challenging problem in image processing. Deep learning methods are now used for this task: besides detecting and automatically classifying all tumor types, they significantly shorten diagnosis time and improve accuracy. In this paper, a deep learning-based model is proposed to classify brain tumors into three classes: glioma, meningioma, and pituitary tumor. In the first phase, the pre-trained ResNet50 network extracts features from MRI images. In the second phase, two proposed attention mechanisms (a depth-separable convolution-based channel attention mechanism and an innovative multi-head attention mechanism) extract and integrate the most effective spatial and channel features. Finally, classification is performed. Evaluations on the Figshare dataset showed an accuracy of 99.32%, outperforming existing models. The proposed model can therefore accurately classify brain tumors and help neurologists and physicians make accurate diagnostic decisions.

Citations: 0
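The channel-attention step described in the abstract can be sketched with a squeeze-and-excitation-style gate: global-average-pool the feature map to a channel vector, pass it through a small bottleneck, and reweight channels with a sigmoid. This is a simplified NumPy stand-in for the paper's depth-separable convolution-based mechanism; the weights are random and the feature-map shape is a toy assumption, not the paper's architecture:

```python
import numpy as np

# Squeeze-and-excitation-style channel attention over a CNN feature map of
# shape (H, W, C). Simplified stand-in for a depth-separable convolution-based
# channel attention; weights here are random, for illustration only.

rng = np.random.default_rng(0)

def channel_attention(feat, reduction=4):
    h, w, c = feat.shape
    squeeze = feat.mean(axis=(0, 1))               # (C,) global average pool
    w1 = rng.standard_normal((c, c // reduction))  # stand-in bottleneck weights
    w2 = rng.standard_normal((c // reduction, c))
    hidden = np.maximum(squeeze @ w1, 0.0)         # ReLU
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))    # sigmoid channel gate in (0, 1)
    return feat * gate                             # reweight each channel

feat = rng.standard_normal((7, 7, 8))              # toy feature-map patch
out = channel_attention(feat)
print(out.shape)                                   # prints (7, 7, 8)
```

Because the gate lies strictly in (0, 1), the mechanism can only attenuate channels; in a trained network the learned weights decide which channels survive largely intact.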
Securing cloud-based medical data: an optimal dual kernal support vector approach for enhanced EHR management
IF 2 Q2 Engineering Pub Date : 2024-05-25 DOI: 10.1007/s13198-024-02356-1
M. L. Sworna Kokila, E. Fenil, N. P. Ponnuviji, G. Nirmala

Cloud computing is one of the advanced technologies for processing rapidly growing data. At the same time, the storage space required for voluminous digital medical data has grown with mounting electronic health records, driving the adoption of cloud outsourcing. Data outsourced to the cloud must be highly secured. To this end, the paper presents the DKS-CWH algorithm, based on a dual kernal support vector (DKS) classifier and a crossover-based wild horse optimization algorithm. The input grayscale images are gathered from the medical MINST dataset, which includes 58,954 images in six classes: CXR (chest X-ray), breast MRI, abdomen CT, chest CT, hand X-ray, and head CT. Classification and feature extraction are performed at the cloud layer using the DKS-CWH algorithm, and the hyperparameters of the DKS approach are optimized with the crossover-based WHO algorithm. The performance evaluation analyzes effectiveness on prominent metrics such as precision, accuracy, recall, and F1-score and compares the outputs with other competent methods. The results showed the DKS-CWH model offered robust performance with 97% accuracy.

Citations: 0
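A dual-kernel support vector classifier can be approximated by feeding a weighted sum of two Gram matrices to scikit-learn's SVC with a precomputed kernel (a nonnegative combination of positive semidefinite kernels is itself a valid kernel). The choice of RBF plus polynomial, the weight alpha, and the toy data below are assumptions for illustration; the paper tunes such hyperparameters with a crossover-based wild horse optimizer, which is not reproduced here:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

# Dual-kernel SVM sketch: weighted sum of an RBF and a polynomial Gram matrix
# fed to SVC(kernel='precomputed'). Data, kernels, and alpha are illustrative.

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy binary labels

def dual_kernel(A, B, alpha=0.6):
    # Convex combination of two base kernels remains a valid kernel.
    return alpha * rbf_kernel(A, B, gamma=0.5) + \
        (1 - alpha) * polynomial_kernel(A, B, degree=2)

K_train = dual_kernel(X, X)
clf = SVC(kernel='precomputed', C=1.0).fit(K_train, y)
pred = clf.predict(dual_kernel(X, X))     # evaluate on the training data
print((pred == y).mean())
```

In a full pipeline the optimizer would search over alpha, gamma, degree, and C; here they are fixed so the kernel-mixing mechanics stay visible.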
Applying machine learning models on blockchain platform selection
IF 2 Q2 Engineering Pub Date : 2024-05-25 DOI: 10.1007/s13198-024-02363-2
Chhaya Dubey, Dharmendra Kumar, Ashutosh Kumar Singh, Vijay Kumar Dwivedi

Blockchain technology is gaining attention worldwide because it provides a secure, decentralized framework for all types of commercial interactions. When choosing the optimal blockchain platform, one needs to consider its usefulness, adaptability, and compatibility with existing software. Because novice software engineers and developers are not experts in every discipline, they should seek advice from outside experts or educate themselves, and as the number of decision-makers, choices, and criteria grows, the decision-making process becomes increasingly complicated. The success of Bitcoin has spiked demand for blockchain-based solutions in domains such as health, education, and energy, and organizations, researchers, and government bodies are moving towards more secure and accountable technology to build trust and reliability. In this paper, we introduce a model for predicting blockchain development platforms (Hyperledger, Ethereum, Corda, Stellar, Bitcoin, etc.). The proposed work utilizes multiple data sets based on blockchain development platforms and applies various traditional machine learning classification techniques. The obtained results show that models such as Decision Tree and Random Forest outperformed other traditional classification models on multiple data sets with 100% accuracy.

Citations: 0
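The classification setup the abstract describes can be sketched with scikit-learn's Decision Tree and Random Forest on a synthetic table of platform criteria. The feature choices (throughput, latency, permissioned flag), the cluster centers, and the platform labels are invented for illustration and are not the paper's data sets:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Illustrative sketch: predicting a blockchain platform from numeric criteria
# (throughput tps, latency s, permissioned flag). All values are synthetic.

rng = np.random.default_rng(42)
platforms = ["Hyperledger", "Ethereum", "Corda"]          # hypothetical labels
centers = np.array([[1000, 1.0, 1.0],
                    [30, 13.0, 0.0],
                    [170, 5.0, 1.0]])

# 30 noisy samples around each platform's (made-up) characteristic profile.
X = np.vstack([c + rng.normal(scale=[50, 0.5, 0.05], size=(30, 3))
               for c in centers])
y = np.repeat(np.arange(3), 30)

dt = DecisionTreeClassifier(random_state=0).fit(X, y)
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("DecisionTree", dt.score(X, y))
print("RandomForest", rf.score(X, y))
```

On cleanly separated synthetic clusters both models fit the training data almost perfectly, which mirrors (but does not validate) the 100% figure reported above; a held-out split or cross-validation would be needed for a fair estimate.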