
Latest Articles in Applied Computational Intelligence and Soft Computing

An Improved Hashing Approach for Biological Sequence to Solve Exact Pattern Matching Problems
IF 2.9 Q2 Engineering Pub Date: 2023-11-20 DOI: 10.1155/2023/3278505
Prince Mahmud, Anisur Rahman, Kamrul Hasan Talukder
Pattern matching algorithms have gained importance in computer science because they are used in domains such as computational biology, video retrieval, intrusion detection systems, and fraud detection. Finding one or more patterns in a given text is known as pattern matching. Two key metrics for judging exact pattern matching algorithms are the total number of attempts and the number of character comparisons made during matching. The primary focus of our proposed method is reducing both quantities wherever possible. Despite their speed, hash-based pattern matching algorithms may suffer hash collisions. This research improves the Efficient Hashing Method (EHM) algorithm: although the EHM algorithm is effective, its preprocessing phase is time-consuming and it still generates some hash collisions. A novel hashing method is proposed that reduces both the preprocessing time and the hash collisions of the EHM algorithm. We devised the Hashing Approach for Pattern Matching (HAPM) algorithm by combining the strengths of the EHM and Quick Search (QS) algorithms and adding a mechanism to avoid hash collisions. Its preprocessing step combines the bad character table of the QS algorithm, the hashing strategy of the EHM algorithm, and the collision-reducing mechanism. To analyze the performance of the HAPM algorithm, we used three types of datasets: E. coli, DNA sequences, and protein sequences. We compared the proposed method against six algorithms discussed in the literature. The Hash-q with Unique FNG (HqUF) algorithm was compared only on the E. coli and DNA datasets because it creates unique bits for DNA sequences; the proposed HAPM algorithm also overcomes the problems of the HqUF algorithm.
The new method beats older ones in average runtime, number of attempts, and character comparisons for both long and short text patterns, though it performed worse on some short patterns.
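The HAPM algorithm itself is not reproduced in the abstract, but the core idea it builds on can be sketched: hash a sliding window of the text, compare against the pattern's hash, and verify candidate positions character by character because hash matches may be collisions. The following is a minimal Rabin-Karp-style illustration (the character mapping, base, and modulus are chosen for the sketch, not taken from the paper).

```python
def hash_search(text, pattern, base=4, mod=2**31 - 1):
    """Return all starting indices where pattern occurs exactly in text."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # Map characters to small integers (suits DNA/protein alphabets).
    val = lambda c: ord(c) % base
    high = pow(base, m - 1, mod)          # weight of the leading character
    p_hash = w_hash = 0
    for i in range(m):
        p_hash = (p_hash * base + val(pattern[i])) % mod
        w_hash = (w_hash * base + val(text[i])) % mod
    hits = []
    for i in range(n - m + 1):
        # A hash match may be a collision, so verify character by character.
        if w_hash == p_hash and text[i:i + m] == pattern:
            hits.append(i)
        if i < n - m:                      # slide the window by one character
            w_hash = ((w_hash - val(text[i]) * high) * base + val(text[i + m])) % mod
    return hits

print(hash_search("ACGTACGTAC", "ACGT"))  # → [0, 4]
```

The explicit verification step is what makes the result exact even when hashes collide; reducing how often that step fires is precisely what collision-reducing schemes like the one described above target.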
Citations: 0
TOPSIS Method Based on Entropy Measure for Solving Multiple-Attribute Group Decision-Making Problems with Spherical Fuzzy Soft Information
IF 2.9 Q2 Engineering Pub Date: 2023-11-18 DOI: 10.1155/2023/7927541
Perveen P. A. Fathima, Sunil Jacob John, T. Baiju
A spherical fuzzy soft set (SFSS) is a generalized soft set model that is more sensible, practical, and exact. Since it is a very natural generalization, introducing uncertainty measures for SFSSs is important. In this paper, entropy, similarity, and distance measures are defined for SFSSs, and a characterization of spherical fuzzy soft entropy is proposed. Further, the relationships between entropy and similarity measures, and between entropy and distance measures, are discussed in detail. As an application, an algorithm based on the improved technique for order preference by similarity to an ideal solution (TOPSIS) and the proposed entropy measure of SFSSs is presented for solving multiple-attribute group decision-making problems. Finally, an illustrative example demonstrates the effectiveness of the recommended algorithm.
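The spherical-fuzzy-soft variant and the entropy-based weights are not spelled out in the abstract, but the classical TOPSIS backbone it extends can be sketched: normalize the decision matrix, locate the ideal and anti-ideal solutions, and rank alternatives by relative closeness. This is a crisp-data sketch with all-benefit criteria and illustrative weights, not the paper's method.

```python
import math

def topsis(matrix, weights):
    """Rank alternatives (rows) against criteria (columns); higher score is better."""
    # 1. Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(len(weights))]
    v = [[w * row[j] / norms[j] for j, w in enumerate(weights)] for row in matrix]
    # 2. Ideal and anti-ideal solutions (benefit criteria: column max is ideal).
    best = [max(col) for col in zip(*v)]
    worst = [min(col) for col in zip(*v)]
    # 3. Distances to both, then relative closeness in [0, 1].
    scores = []
    for row in v:
        d_best = math.dist(row, best)
        d_worst = math.dist(row, worst)
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Three alternatives scored on three benefit criteria with illustrative weights.
scores = topsis([[7, 9, 8], [8, 7, 6], [9, 6, 7]], [0.4, 0.35, 0.25])
print(max(range(len(scores)), key=lambda i: scores[i]))  # index of best alternative
```

In the paper's setting, the weights would come from the proposed SFSS entropy measure rather than being fixed by hand.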
Citations: 0
Fuzzy Set and Soft Set Theories as Tools for Vocal Risk Diagnosis
IF 2.9 Q2 Engineering Pub Date: 2023-11-15 DOI: 10.1155/2023/5525978
José Sanabria, Marinela Álvarez, O. Ferrer
New mathematical theories are increasingly valued for their versatility in intelligent systems that support decision-making and diagnosis in different real-world situations. This is especially relevant in the health sciences, where these theories have great potential for designing effective solutions that improve people's quality of life. In recent years, several studies have investigated predictive indicators of vocal dysfunction. However, the rapid increase in new prediction studies, driven by advancing medical technology, has dictated the need for reliable methods to extract clinically meaningful knowledge where complex, nonlinear interactions between these markers naturally exist. There is a growing need to focus not only on knowledge extraction but also on data transformation and treatment to enhance the quality of healthcare delivery. Mathematical tools such as fuzzy set theory and soft set theory have been successfully applied to data analysis in many real-life problems where the data exhibit vagueness and uncertainty. These theories improve data interpretability, handle the inherent uncertainty of real-world data, and facilitate decision-making based on the available information. In this paper, we use soft set theory and fuzzy set theory to develop a prediction system based on knowledge from phonoaudiology. We use information such as patient age, fundamental frequency, and perturbation index to estimate the risk of voice loss in patients. Our goal is to help the speech-language pathologist determine whether the patient requires intervention in the presence of an at-risk or altered voice result, taking into account that excessive and inappropriate voice behavior can result in organic manifestations.
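The fuzzy-set side of such a system can be illustrated with membership functions over one acoustic marker. The thresholds, the choice of jitter as the perturbation index, and the max-membership rule below are all invented for the sketch; the paper's actual fuzzy/soft-set construction over age, fundamental frequency, and perturbation index is not reproduced.

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def vocal_risk(jitter_percent):
    """Return the risk label with the highest membership degree (hypothetical cutoffs)."""
    grades = {
        "normal":  triangular(jitter_percent, -0.5, 0.0, 1.0),
        "at_risk": triangular(jitter_percent, 0.5, 1.0, 1.5),
        "altered": triangular(jitter_percent, 1.0, 2.0, 3.5),
    }
    return max(grades, key=grades.get)

print(vocal_risk(0.3))  # → normal
print(vocal_risk(2.0))  # → altered
```

Overlapping memberships are the point of the fuzzy formulation: a jitter of 0.8% belongs partially to both "normal" and "at_risk", mirroring the clinical ambiguity the paper sets out to model.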
Citations: 0
A Comparative Analysis of Traditional SARIMA and Machine Learning Models for CPI Data Modelling in Pakistan
Q2 Engineering Pub Date: 2023-11-07 DOI: 10.1155/2023/3236617
Moiz Qureshi, Arsalan Khan, Muhammad Daniyal, Kassim Tawiah, Zahid Mehmood
Background. In economic theory, a steady consumer price index (CPI) and its associated low inflation rate (IR) are much preferred to volatile ones. The CPI is considered a major variable in measuring a country's IR. These indices track price changes and carry major significance in monetary policy decisions. In this study, conventional and machine learning methodologies are applied to model and forecast the CPI of Pakistan. Methods. Pakistan's yearly CPI data from 1960 to 2021 were modelled using seasonal autoregressive integrated moving average (SARIMA), neural network autoregressive (NNAR), and multilayer perceptron (MLP) models. Several forms of the models were compared using the root mean square error (RMSE), mean square error (MSE), and mean absolute percentage error (MAPE) as key performance indicators (KPIs). Results. The 20-hidden-layered MLP model emerged as the best-performing model for CPI forecasting based on the KPIs. Forecasted values of Pakistan's CPI from 2022 to 2031 show a steep increase, which is unwelcome to consumers and economic management. Conclusion. If not addressed, the rising CPI trend will erode purchasing power through higher commodity prices. It is recommended that the government put vigorous policies in place to address this alarming situation.
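The three KPIs used to compare the models have standard definitions that can be sketched directly; the CPI series and model outputs below are invented placeholders, not the paper's data.

```python
import math

def kpis(actual, predicted):
    """Compute MSE, RMSE, and MAPE (in percent) for a forecast."""
    errors = [a - p for a, p in zip(actual, predicted)]
    mse = sum(e ** 2 for e in errors) / len(errors)
    rmse = math.sqrt(mse)
    mape = 100 * sum(abs(e / a) for e, a in zip(errors, actual)) / len(errors)
    return {"MSE": mse, "RMSE": rmse, "MAPE": mape}

# Toy yearly CPI values vs. a model's predictions (illustrative numbers only).
print(kpis([100, 110, 121], [98, 112, 120]))
```

Ranking candidate models by these three numbers, lowest wins, is the selection procedure the study applies to the SARIMA, NNAR, and MLP variants.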
Citations: 0
A Two-Phase Pattern Generation and Production Planning Procedure for the Stochastic Skiving Process
Q2 Engineering Pub Date: 2023-11-06 DOI: 10.1155/2023/9918022
Tolga Kudret Karaca, Funda Samanlioglu, Ayca Altay
The stochastic skiving stock problem (SSP), a relatively new combinatorial optimization problem, is considered in this paper. The conventional SSP seeks the optimal structure for skiving small pieces of different sizes side by side to form as many large items (products) of a desired width as possible. This study examines a multiproduct case of the SSP under uncertain demand and waste rates, including products of different widths. This stochastic version of the SSP considers a random demand for each product and a random waste rate during production. A two-stage stochastic programming approach with a recourse action is implemented to study this stochastic NP-hard problem at a large scale. The problem is solved in two phases. In the first phase, the dragonfly algorithm constructs minimal patterns that serve as input for the next phase. The second phase performs sample-average approximation to solve the stochastic production problem. Results indicate that the two-phase heuristic approach is highly efficient in computational run time and provides robust solutions with an optimality gap of 0.3% in the worst-case scenario. In addition, we compare the performance of the dragonfly algorithm (DA) with particle swarm optimization (PSO) for pattern generation. Benchmarks indicate that the DA produces more robust minimal pattern sets as the tightness of the problem increases.
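The sample-average approximation (SAA) idea behind the second phase can be sketched: replace the expected recourse cost, which is intractable in closed form, with an average over sampled demand and waste-rate scenarios, then pick the decision with the lowest sample average. The cost coefficients, distributions, and single-product setting below are invented for illustration and are far simpler than the paper's multiproduct model.

```python
import random

def saa_expected_cost(production_qty, n_scenarios=10_000, seed=42,
                      unit_cost=1.0, shortage_penalty=4.0, scrap_cost=0.5):
    """Monte Carlo estimate of expected cost for one production quantity."""
    rng = random.Random(seed)  # fixed seed: all candidates see the same scenarios
    total = 0.0
    for _ in range(n_scenarios):
        demand = rng.gauss(100, 15)      # random product demand
        waste = rng.uniform(0.0, 0.1)    # random waste rate during production
        usable = production_qty * (1 - waste)
        shortage = max(0.0, demand - usable)
        surplus = max(0.0, usable - demand)
        total += (unit_cost * production_qty
                  + shortage_penalty * shortage   # recourse: unmet demand is penalized
                  + scrap_cost * surplus)         # recourse: leftover is scrapped
    return total / n_scenarios

# Evaluate candidate quantities; the SAA minimizer approximates the true
# stochastic optimum as the number of scenarios grows.
best = min(range(90, 131, 5), key=saa_expected_cost)
print(best)
```

Using common random numbers (the fixed seed) across candidates keeps the comparison fair, a standard device when SAA is used to rank decisions.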
Citations: 0
Machine Learning Approaches to Predict Patient’s Length of Stay in Emergency Department
Q2 Engineering Pub Date: 2023-10-27 DOI: 10.1155/2023/8063846
Mohammad A. Shbool, Omar S. Arabeyyat, Ammar Al-Bazi, Abeer Al-Hyari, Arwa Salem, Thana’ Abu-Hmaid, Malak Ali
As the COVID-19 pandemic afflicted the globe, health systems worldwide were significantly affected. The pandemic impacted many sectors, including health in the Kingdom of Jordan. Among the resources under the heaviest pressure during crises are emergency departments (EDs), the most demanded hospital resource under normal conditions and critical during crises. Managing health systems efficiently and achieving the best planning and allocation of ED resources is therefore crucial to improving their capacity to absorb a crisis's impact. Knowing the critical factors that affect the prediction of patient length of stay, and analyzing the effect of each, is key to reducing the risks of prolonged waiting and crowding inside EDs. This research aims to determine the critical factors, i.e., the predictor variables, that predict the outcome: the length of stay. Patients' length of stay in EDs is therefore categorized by waiting-time duration as low, medium, or high using supervised machine learning (ML) approaches. Unsupervised algorithms have also been applied to classify patients' length of stay in local EDs in the Kingdom of Jordan. The Arab Medical Centre Hospital is selected as a case study to assess the performance of the proposed ML model. Data spanning 22 months, covering the period before and after COVID-19, are used to train the proposed feedforward network. The proposed model is compared with other ML approaches to justify its superiority. Comparative and correlation analyses are also conducted on the considered attributes (inputs) to help classify the LOS and the patient's length of stay in the ED. The best algorithms for this specific problem are tree models such as the decision stump, REB tree, and random forest, and the multilayer perceptron (with a batch size of 50 and a learning rate of 0.001). Results showed better performance in terms of accuracy and ease of implementation.
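One of the best-performing models named above, the decision stump, is simply a one-split decision tree. This minimal sketch learns the threshold on a single numeric feature that minimizes training errors; the waiting-time data and two-class labels are invented toy values, not the hospital dataset.

```python
from collections import Counter

def fit_stump(xs, ys):
    """Return (threshold, left_label, right_label) minimizing training errors."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        l_lab = Counter(left).most_common(1)[0][0] if left else None
        r_lab = Counter(right).most_common(1)[0][0] if right else l_lab
        errors = sum(y != l_lab for y in left) + sum(y != r_lab for y in right)
        if best is None or errors < best[0]:
            best = (errors, t, l_lab, r_lab)
    return best[1:]

# Toy example: waiting time in hours vs. length-of-stay category.
hours = [1, 2, 2, 3, 6, 7, 8, 9]
los = ["low"] * 4 + ["high"] * 4
t, l_lab, r_lab = fit_stump(hours, los)
predict = lambda x: l_lab if x <= t else r_lab
print(t, predict(1.5), predict(7.5))  # → 3 low high
```

A three-class low/medium/high stump works the same way with two thresholds; ensembles such as the random forest mentioned above combine many deeper trees of this kind.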
Citations: 0
Forged Video Detection Using Deep Learning: A SLR
Q2 Engineering Pub Date: 2023-10-25 DOI: 10.1155/2023/6661192
Maryam Munawar, Iram Noreen, Raed S. Alharthi, Nadeem Sarwar
In today’s digital landscape, video and image data have emerged as pivotal and widely adopted means of communication. They serve not only as a ubiquitous mode of conveying information but also as indispensable evidential and substantiating elements across diverse domains, encompassing law enforcement, forensic investigations, media, and numerous others. This study employs a systematic literature review (SLR) methodology to investigate the existing body of knowledge. An exhaustive review and analysis of 90 primary research studies was conducted, unveiling a range of research methodologies instrumental in detecting forged videos, including deep neural networks, convolutional neural networks, Deepfake analysis, watermarking networks, and clustering, amongst others. This array of techniques highlights the breadth of the field and emphasizes the need to combat the evolving challenges posed by forged video content. The study shows that videos are susceptible to an array of manipulations, with key issues including frame insertion, deletion, and duplication due to their dynamic nature. The main limitations identified in the domain concern copy-move forgery, object-based forgery, and frame-based forgery. This study serves as a comprehensive repository of the latest advancements and techniques, structured and summarized to benefit researchers and practitioners in the field. It elucidates the complex challenges inherent to video forensics.
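Of the manipulations the review highlights, frame duplication admits the simplest detection idea: hash every frame and flag repeats. The sketch below simulates frames as byte strings; a real pipeline would hash decoded video frames, and the deep-learning detectors surveyed in the review go well beyond this exact-match check (they must also catch re-encoded, near-duplicate frames).

```python
import hashlib

def find_duplicate_frames(frames):
    """Return pairs (first_index, duplicate_index) of byte-identical frames."""
    seen = {}        # digest -> first index where it appeared
    duplicates = []
    for i, frame in enumerate(frames):
        digest = hashlib.sha256(frame).hexdigest()
        if digest in seen:
            duplicates.append((seen[digest], i))
        else:
            seen[digest] = i
    return duplicates

# Simulated frame buffers: frame 3 duplicates frame 1.
frames = [b"frame-a", b"frame-b", b"frame-c", b"frame-b"]
print(find_duplicate_frames(frames))  # → [(1, 3)]
```

Exact hashing misses duplicates that were re-compressed, which is why the surveyed literature turns to learned representations for robust frame-based forgery detection.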
引用次数: 0
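Among the manipulations the review lists (frame insertion, deletion, and duplication), exact frame duplication admits a very simple baseline check. The sketch below is a toy illustration, not one of the surveyed deep-learning methods: it hashes raw frame buffers to flag byte-identical repeats.

```python
import hashlib

def find_duplicated_frames(frames):
    """Flag byte-identical frames by hashing each raw frame buffer.

    frames: iterable of bytes objects (decoded frame buffers).
    Returns (original_index, duplicate_index) pairs. Exact hashing only
    catches verbatim duplication; recompressed or edited copies require
    the learned detectors surveyed in the review.
    """
    seen = {}
    pairs = []
    for idx, frame in enumerate(frames):
        digest = hashlib.sha256(frame).hexdigest()
        if digest in seen:
            pairs.append((seen[digest], idx))
        else:
            seen[digest] = idx
    return pairs
```

A real pipeline would hash perceptual features rather than raw bytes, since re-encoding changes the byte stream without changing the visible content.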
Embedded Parallel Implementation of LDPC Decoder for Ultra-Reliable Low-Latency Communications 用于超可靠低延迟通信的LDPC解码器的嵌入式并行实现
Q2 Engineering Pub Date : 2023-10-21 DOI: 10.1155/2023/5573438
Mhammed Benhayoun, Mouhcine Razi, Anas Mansouri, Ali Ahaitouf
Ultra-reliable low-latency communications (URLLC) are designed for applications such as self-driving cars and telesurgery that require a response within milliseconds and are very sensitive to transmission errors. To match the computational complexity of LDPC decoding algorithms to URLLC applications on IoT devices with very limited computational resources, this paper presents a new parallel, low-latency software implementation of the LDPC decoder. First, a decoding algorithm optimization and a compact data structure are proposed. Next, a parallel software implementation is carried out on ARM multicore platforms in order to evaluate the latency of the proposed optimization. The synthesis results show a 50% reduction in the memory size requirement and a threefold speedup in processing time compared to previous software decoder implementations. The decoding latency reached on the parallel processing platform is 150 μs for 288 bits at a bit error ratio of 3.4 × 10⁻⁹.
Citations: 0
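The abstract does not reproduce the paper's optimized decoder, so the sketch below is a generic stand-in: a hard-decision bit-flipping decoder, one of the simplest LDPC decoding algorithms. It illustrates where the data parallelism exploited by multicore implementations comes from: the parity checks in each iteration are mutually independent. The (7,4) Hamming parity-check matrix in the usage example is an arbitrary small assumption, not the paper's code.

```python
def bit_flip_decode(H, received, max_iters=50):
    """Hard-decision bit-flipping LDPC decoding (Gallager-style sketch).

    H: parity-check matrix as a list of 0/1 rows; received: hard-decision bits.
    Each iteration evaluates every parity check independently, which is the
    data parallelism a multicore implementation can exploit. The
    flip-all-worst heuristic is simple but not guaranteed to converge.
    """
    x = list(received)
    for _ in range(max_iters):
        # all checks are independent of one another (parallelisable step)
        syndrome = [sum(h * b for h, b in zip(row, x)) % 2 for row in H]
        if not any(syndrome):
            return x, True  # valid codeword found
        # per bit, count the unsatisfied checks it participates in
        fails = [sum(row[j] for row, s in zip(H, syndrome) if s)
                 for j in range(len(x))]
        worst = max(fails)
        x = [b ^ 1 if f == worst else b for b, f in zip(x, fails)]
    return x, False

# Example: the (7,4) Hamming parity-check matrix with a single bit error.
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
decoded, ok = bit_flip_decode(H, [1, 0, 0, 0, 0, 0, 0])
```

Production decoders use soft-decision algorithms such as min-sum, which the paper targets; the hard-decision variant is shown only because it fits in a few lines.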
Facial Emotion Recognition and Classification Using the Convolutional Neural Network-10 (CNN-10) 基于卷积神经网络-10 (CNN-10)的面部情绪识别与分类
Q2 Engineering Pub Date : 2023-10-13 DOI: 10.1155/2023/2457898
Emmanuel Gbenga Dada, David Opeoluwa Oyewola, Stephen Bassi Joseph, Onyeka Emebo, Olugbenga Oluseun Oluwagbemi
Facial expressions play an important role in nonverbal communication because they help represent the inner emotions of individuals. Emotions can indicate the state of health and internal wellbeing of individuals. Facial expression detection has been a hot research topic in the last couple of years. The motivation for applying the convolutional neural network-10 (CNN-10) model to facial expression recognition stems from its ability to detect spatial features, handle translation invariance, learn expressive feature representations, capture global context, and achieve scalability, adaptability, and interoperability with transfer learning methods. The model offers a powerful instrument for reliably detecting and comprehending facial expressions, supporting applications in emotion recognition, human-computer interaction, cognitive computing, and other areas. Earlier studies have developed different deep learning architectures to address the challenge of facial expression recognition. Many of these studies perform well on datasets of images taken under controlled conditions but fall short on more difficult datasets with greater image diversity and incomplete faces. This paper applied CNN-10 and ViT models to facial emotion classification. The performance of the proposed models was compared with that of VGG19 and InceptionV3. The CNN-10 outperformed the other models, with an accuracy of 99.9% on the CK+ dataset, 84.3% on FER-2013, and 95.4% on JAFFE.
Citations: 0
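The CNN-10 architecture itself is not given in the abstract, but the two primitives behind the spatial-feature and translation-tolerance claims can be shown in a few lines. The dependency-free sketch below is an illustration, not the authors' model: it implements valid 2D convolution and max pooling, and shifting a pattern by one pixel leaves the maximum filter response unchanged.

```python
def conv2d(img, kernel):
    """Valid 2D convolution (no padding) over nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    return [[sum(kernel[a][b] * img[i + a][j + b]
                 for a in range(kh) for b in range(kw))
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]

def max_pool(fmap, size=2):
    """Non-overlapping max pooling; trailing rows/cols are dropped."""
    h, w = len(fmap) // size, len(fmap[0]) // size
    return [[max(fmap[i * size + a][j * size + b]
                 for a in range(size) for b in range(size))
             for j in range(w)]
            for i in range(h)]
```

Convolution detects the same local pattern wherever it appears in the image, and pooling makes the response insensitive to small shifts; stacking many such layers (plus nonlinearities and a classifier head) yields architectures like CNN-10.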
Local Search-Based Metaheuristic Methods for the Solid Waste Collection Problem 基于局部搜索的固体废物收集问题的元启发式方法
Q2 Engineering Pub Date : 2023-10-06 DOI: 10.1155/2023/5398400
Haneen Algethami
The solid waste collection problem refers to truck route optimisation to collect waste from containers across various locations. Recent concerns exist over the impact of solid waste management on the environment. Hence, it is necessary to find feasible routes while minimising operational costs and fuel consumption. In this paper, in order to reduce fuel consumption, the number of trucks used is considered in the objective function along with the waste load and the travelling time. With current computational capabilities, finding an optimal solution is challenging. Thus, this study investigates the effect of well-known metaheuristic methods on this problem’s objective function and computational times. The routing solver in Google OR-Tools is used with three well-known metaheuristic methods for neighbourhood exploration, namely a guided local search (GLS), a tabu search (TS), and simulated annealing (SA), together with two initialisation strategies, Clarke and Wright’s algorithm and the nearest neighbour algorithm. Results showed that optimal solutions are found in faster computational times than using only an IP solver, especially for large instances. Local search methods, notably GLS, significantly improved the route construction process. The nearest neighbour algorithm often outperformed Clarke and Wright’s algorithm. The findings can be applied to improve operations in Saudi Arabia’s waste management sector.
Citations: 0
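OR-Tools itself is not required to illustrate the pipeline the abstract describes. The sketch below is a self-contained pure-Python analogue, not the authors' code or the OR-Tools API: a nearest-neighbour initial route improved by simulated annealing over 2-opt segment reversals, assuming a symmetric distance matrix and a single truck starting from a depot.

```python
import math
import random

def nearest_neighbour_route(dist, depot=0):
    """Greedy initialisation: always visit the closest unvisited container."""
    n = len(dist)
    route, unvisited = [depot], set(range(n)) - {depot}
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist[route[-1]][j])
        route.append(nxt)
        unvisited.remove(nxt)
    return route

def route_cost(dist, route):
    """Total length of the closed tour depot -> ... -> depot."""
    return sum(dist[a][b] for a, b in zip(route, route[1:] + [route[0]]))

def anneal(dist, route, temp=100.0, cooling=0.995, iters=5000, seed=1):
    """Simulated annealing with 2-opt segment-reversal moves.

    Worse candidates are accepted with probability exp(-delta / temp),
    which shrinks as the temperature cools (the SA neighbourhood search
    idea; GLS and TS explore the same neighbourhoods differently).
    """
    rng = random.Random(seed)
    best = cur = list(route)
    best_cost = cur_cost = route_cost(dist, cur)
    for _ in range(iters):
        i, j = sorted(rng.sample(range(1, len(cur)), 2))
        cand = cur[:i] + cur[i:j + 1][::-1] + cur[j + 1:]
        cand_cost = route_cost(dist, cand)
        if cand_cost < cur_cost or rng.random() < math.exp((cur_cost - cand_cost) / temp):
            cur, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = list(cur), cur_cost
        temp *= cooling
    return best, best_cost
```

A multi-truck variant would add one route per vehicle plus load and time constraints, which is exactly what the OR-Tools routing solver manages internally.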