
Recent Advances in Computer Science and Communications: Latest Publications

Supervised Learning based E-mail/SMS Spam Classifier
Pub Date : 2024-06-10 DOI: 10.2174/0126662558279046240126051302
Satendra Kumar, Raj Kumar, A. Saini
Spam is one of the challenging problems facing the modern Internet: it annoys individual users and can wreak financial havoc on businesses. Spam messages target customers without their permission and clog their mailboxes, consuming time and organizational resources spent checking for and deleting them. Although most web users openly dislike spam, enough of them still click on commercial offers that spammers can profit from them, so spam remains a real problem. Most users know they should avoid spam, but they need clear guidance on how to avoid and delete it, and no single countermeasure eliminates spam entirely. Filtering is the most straightforward and practical spam-blocking technique.

We present procedures for classifying e-mails as spam or ham based on text classification. Several interrelated preprocessing steps are applied to the e-mail corpus, such as stop-word exclusion, stemming, dimensionality reduction, and feature selection to extract indicative terms from each attribute; finally, distinct classifiers are used to quarantine messages as spam or ham.

The Naïve Bayes classifier is a good choice, and classifiers such as Simple Logistic and AdaBoost also perform well. However, the Support Vector Machine classifier (SVC) outperforms them; the SVC makes decisions based on comparisons across individual cases.

Many spam-filtering studies have focused on recent classifier-related challenges, and spam detection with Machine Learning (ML) is an important area of modern research. We examine the adequacy of the proposed work and the application of multiple learning algorithms to separate spam from e-mail; the algorithms themselves are likewise scrutinized.
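The filtering pipeline the abstract describes — tokenization with stop-word exclusion, feature weighting, then an SVM classifier — can be sketched with scikit-learn. This is a minimal illustration, not the authors' implementation; the toy messages and labels are invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny invented corpus; real work would use a labeled e-mail/SMS dataset.
texts = [
    "win a free prize claim your cash reward now",
    "free cash offer click to claim your prize",
    "urgent winner claim free reward money",
    "meeting moved to friday please update the agenda",
    "please review the project report before the meeting",
    "lunch tomorrow to discuss the project schedule",
]
labels = ["spam", "spam", "spam", "ham", "ham", "ham"]

# stop_words="english" plays the role of stop-word exclusion, and TF-IDF
# weighting stands in for the feature-extraction step described above.
classifier = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    LinearSVC(),
)
classifier.fit(texts, labels)

print(classifier.predict(["claim your free prize now"])[0])
print(classifier.predict(["agenda for the project meeting"])[0])
```

On a real dataset the same pipeline would simply be fitted on the training split; swapping `LinearSVC` for `MultinomialNB` gives the Naïve Bayes baseline the abstract compares against.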
Cited by: 0
ROUGE-SS: A New ROUGE Variant for the Evaluation of Text Summarization
Pub Date : 2024-06-06 DOI: 10.2174/0126662558304595240528111535
Sandeep Kumar, Arun Solanki, NZ Jhanjhi
Prior research on abstractive text summarization has predominantly relied on the ROUGE evaluation metric, which, while effective, has limitations in capturing semantic meaning due to its focus on exact word or phrase matching. This deficiency is particularly pronounced in abstractive summarization, where the goal is to generate novel summaries by rephrasing and paraphrasing the source text, highlighting the need for a more nuanced evaluation metric capable of capturing semantic similarity.

In this study, the limitations of existing ROUGE metrics are addressed by proposing a novel variant called ROUGE-SS. Unlike traditional ROUGE metrics, ROUGE-SS extends beyond exact word matching to consider synonyms and semantic similarity. Leveraging resources such as the WordNet online dictionary, ROUGE-SS identifies matches between source text and summaries based on both exact word overlaps and semantic context. Experiments are conducted to compare the performance of ROUGE-SS against other ROUGE variants, particularly in assessing abstractive summarization models, and an algorithm for the synonym features of ROUGE-SS is proposed.

The experiments demonstrate the superior performance of ROUGE-SS in evaluating abstractive text summarization models compared to existing ROUGE variants. ROUGE-SS yields higher F1 scores and better overall performance, achieving a significant reduction in training loss and impressive accuracy. The proposed ROUGE-SS evaluation technique is evaluated on datasets such as CNN/Daily Mail, DUC-2004, Gigaword, and Inshorts News, where it gives better results than other ROUGE variants; the F1-score of the proposed metric improves by an average of 8.8%. These findings underscore the effectiveness of ROUGE-SS in capturing semantic similarity and providing a more comprehensive evaluation metric for abstractive summarization.

In conclusion, the introduction of ROUGE-SS represents a significant advancement in the evaluation of abstractive text summarization. By extending beyond exact word matching to incorporate synonyms and semantic context, ROUGE-SS offers researchers a more effective tool for assessing summarization quality. This study highlights the importance of considering semantic meaning in evaluation metrics and provides a promising direction for future research on abstractive text summarization.
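The core idea — counting a candidate token as a match when it is either identical to a reference token or a synonym of one — can be sketched in a few lines. This is an illustrative sketch, not the paper's algorithm: a tiny hand-written synonym table stands in for the WordNet lookup the paper uses, and the function names are invented.

```python
def _match(a, b, syn):
    # Two tokens match if identical or listed as synonyms of each other.
    return a == b or b in syn.get(a, set()) or a in syn.get(b, set())

def rouge_ss_f1(reference, candidate, syn):
    """Unigram F1 with synonym-aware matching (toy ROUGE-SS sketch)."""
    ref = reference.lower().split()
    cand = candidate.lower().split()
    remaining = list(ref)
    matched = 0
    for c in cand:                     # greedy one-to-one matching
        for r in remaining:
            if _match(c, r, syn):
                remaining.remove(r)
                matched += 1
                break
    recall = matched / len(ref) if ref else 0.0
    precision = matched / len(cand) if cand else 0.0
    if recall + precision == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

syn = {"physician": {"doctor"}}        # stand-in for a WordNet synset lookup
ref = "the doctor examined the patient"
cand = "the physician examined the patient"
print(rouge_ss_f1(ref, cand, {}))      # exact matching only: 0.8
print(rouge_ss_f1(ref, cand, syn))     # synonym-aware: 1.0
```

The paraphrased word "physician" is penalized by exact matching but credited once synonyms are consulted, which is precisely the gap between plain ROUGE-1 and the proposed variant.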
Cited by: 0
A Generic Integrated Framework of Unsupervised Learning and Natural Language Processing Techniques for Digital Healthcare: A Comprehensive Review and Future Research Directions
Pub Date : 2024-06-03 DOI: 10.2174/0126662558297036240527120451
K. Shastry
The increasing availability of digital healthcare data has opened up fresh prospects for improving healthcare through data analysis. Machine learning (ML) procedures show great promise in analyzing large volumes of healthcare data to extract insights that can be used to improve patient outcomes and healthcare delivery. In this work, we suggest an integrated framework for digital healthcare data analysis that incorporates unsupervised learning techniques and natural language processing (NLP) techniques into the analysis pipeline. The unsupervised learning module involves techniques such as clustering and anomaly detection. By clustering similar patients together based on their medical history and other relevant factors, healthcare providers can identify subgroups of patients who may require different treatment approaches. Anomaly detection can also help to flag patients who deviate from the norm, which could be indicative of underlying health issues or other matters needing additional investigation. The second module, on NLP, enables healthcare providers to analyze unstructured text data such as clinical notes, patient surveys, and social media posts. NLP techniques can help to identify key themes and patterns in these datasets, surfacing insights that would not be readily apparent through other means.

Overall, incorporating unsupervised learning techniques and NLP into the analysis pipeline for digital healthcare data holds promise for enhancing patient outcomes and enabling more personalized treatments, and it represents a potential domain for upcoming research in this field. In this work, we also review the current state of research in digital healthcare data analysis with ML, including applications such as forecasting clinic readmissions, detecting cancerous tumors, and developing personalized drug-dosing recommendations. We also examine the potential benefits and challenges of utilizing ML in healthcare data analysis, including issues related to data quality, privacy, and interpretability. Lastly, we discuss forthcoming research paths, including the need for improved methods for integrating information from several sources, developing more interpretable ML models, and addressing ethical and regulatory challenges. The use of ML in digital healthcare data analysis promises to transform healthcare by enabling more precise diagnoses, personalized treatments, and improved health outcomes, and this work offers a complete overview of the current trends.
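The two unsupervised steps the framework describes — clustering patients into subgroups and flagging anomalous cases — can be sketched with scikit-learn. The "patient" features below are invented toy data, not anything from the review; the point is only the shape of the pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Invented stand-in for patient records (e.g. columns could be age and
# number of prior admissions), forming two well-separated subgroups.
group_a = rng.normal(loc=[30.0, 1.0], scale=0.5, size=(20, 2))
group_b = rng.normal(loc=[70.0, 8.0], scale=0.5, size=(20, 2))
patients = np.vstack([group_a, group_b])

# Clustering: recover the two patient subgroups.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(patients)
labels = kmeans.labels_

# Anomaly detection: a record far from both subgroups should receive a
# lower anomaly score than typical patients (lower = more anomalous).
outlier = np.array([[120.0, 30.0]])
forest = IsolationForest(random_state=0).fit(patients)
print(forest.score_samples(outlier)[0])
print(forest.score_samples(patients).mean())
```

In a real deployment the features would come from structured records or from NLP-derived representations of clinical notes, and the flagged anomalies would be handed to clinicians for review rather than acted on automatically.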
Cited by: 0
Recent Advances in Artificial Intelligence & Machine Learning: A Practical Approach
Pub Date : 2024-05-01 DOI: 10.2174/266625581703240502163544
Vikash Yadav
Cited by: 0
Artificial Intelligence (AI) driven Smart World
Pub Date : 2024-04-17 DOI: 10.2174/266625581702240417140438
Sarika Jain
Cited by: 0
A Cost-Minimized Task Migration Assignment Mechanism in Blockchain Based Edge Computing System
Pub Date : 2024-04-15 DOI: 10.2174/0126662558292891240409050246
Binghua Xu, Yan Jin, Lei Yu
Cloud computing is usually introduced to execute computing-intensive tasks for data processing and data mining. However, this paradigm may not be effective for latency-sensitive or dynamically interactive tasks. As a supplement to cloud computing, edge computing has attracted much attention as a new paradigm that effectively reduces processing latency, energy consumption cost, and bandwidth consumption for time-sensitive or resource-sensitive tasks. To better meet such requirements during task assignment in edge computing systems, an intelligent task migration assignment mechanism based on blockchain is proposed, which jointly considers resource allocation, resource control, and credit degree.

In this paper, an optimization problem is first constructed to minimize the total cost of completing all tasks under constraints of delay, energy consumption, communication, and credit degree. Here, the terminal node mines computing resources from edge nodes to complete task migration. An incentive method based on blockchain is provided to mobilize the activity of terminal nodes and edge nodes and to ensure the security of transactions during migration. The designed allocation rules ensure the fairness of rewards for successfully mined resources. To solve the optimization problem, an intelligent migration algorithm is proposed that uses a dual "actor-reviewer" neural network with inverse gradient updates, which makes the training process more stable and easier to converge.

Compared with two existing benchmark mechanisms, extensive simulation results indicate that the proposed neural-network-based mechanism converges faster and achieves the minimal total cost.

In conclusion, to satisfy the delay and energy-consumption requirements of computing-intensive tasks in edge computing scenarios, an intelligent, blockchain-based task migration assignment mechanism with joint resource allocation and control is proposed, and a dual "actor-reviewer" neural network algorithm is designed and executed to realize this mechanism effectively.
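The paper's full formulation (delay, energy, communication, and credit constraints, blockchain incentives, and the actor-reviewer network) is beyond a short example, but the cost-minimizing assignment at its core can be illustrated with a greedy sketch. The node capacities, per-unit costs, and function name below are invented for illustration and are not from the paper.

```python
def assign_tasks(tasks, nodes):
    """Greedily assign each task to the cheapest edge node with enough
    remaining capacity, largest-demand tasks first.

    tasks: list of (task_id, demand)
    nodes: dict node_id -> {"capacity": units, "cost": cost per unit}
    Returns (assignment dict, total cost). Infeasible tasks map to None.
    """
    assignment = {}
    total = 0.0
    for tid, demand in sorted(tasks, key=lambda t: -t[1]):
        feasible = [(n["cost"] * demand, nid)
                    for nid, n in nodes.items() if n["capacity"] >= demand]
        if not feasible:
            assignment[tid] = None          # would be migrated to the cloud
            continue
        cost, nid = min(feasible)           # cheapest feasible node
        nodes[nid]["capacity"] -= demand
        assignment[tid] = nid
        total += cost
    return assignment, total

nodes = {"edge1": {"capacity": 6, "cost": 1.0},
         "edge2": {"capacity": 10, "cost": 2.0}}
assignment, total = assign_tasks([("t1", 5), ("t2", 3)], nodes)
print(assignment, total)   # t1 takes the cheap node, t2 overflows to edge2
```

The paper replaces this greedy rule with a learned policy, since the real problem couples cost with delay, energy, and credit constraints that a one-shot heuristic cannot balance.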
Cited by: 0
Extensive Review of Literature on Explainable AI (XAI) in Healthcare Applications
Pub Date : 2024-03-20 DOI: 10.2174/0126662558296699240314055348
Ramasamy Mariappan
Artificial Intelligence (AI) techniques are widely used in the medical field for various applications, including the diagnosis of diseases, the prediction and classification of diseases, drug discovery, and more. However, these AI techniques lack transparency in the predictions or decisions they make because of their black-box operation. Explainable AI (XAI) addresses these issues, enabling physicians to make better interpretations and decisions. This article explores XAI techniques in the field of healthcare applications, including the Internet of Medical Things (IoMT). XAI aims to provide transparency, accountability, and traceability in AI-based systems for healthcare applications. It can help in interpreting the predictions or decisions made by medical diagnosis systems, medical decision support systems, smart wearable healthcare devices, and the like. Nowadays, XAI methods are utilized in numerous medical applications over the Internet of Things (IoT), such as medical diagnosis, prognosis, and the explanation of AI models; hence, XAI in the context of IoMT and healthcare has the potential to enhance the reliability and trustworthiness of AI systems.
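One simple, model-agnostic XAI technique of the kind such reviews survey is permutation importance: shuffle one feature at a time and measure how much the model's score drops. A minimal sketch with scikit-learn, on invented data where only the first feature actually drives the label (this is an illustration of the general technique, not a method from the review):

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented stand-in for a clinical dataset: feature 0 determines the
# label, feature 1 is pure noise.
X = rng.normal(size=(300, 2))
y = (X[:, 0] > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Shuffling an informative feature hurts accuracy; shuffling noise does not.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)   # feature 0 should dominate
```

An explanation like this tells a clinician *which* inputs a diagnosis model leaned on, which is the transparency and accountability the abstract calls for.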
Citations: 0
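As a concrete illustration of the model-agnostic explanation methods this abstract surveys, the sketch below implements permutation feature importance: permute one feature, re-score the model, and read the error increase as that feature's importance. The toy model, data, and seed are assumptions for illustration, not taken from the article.

```python
import random

def model(x):
    # Hypothetical diagnostic score: depends strongly on feature 0, weakly on feature 1.
    return 3.0 * x[0] + 0.1 * x[1]

def mse(xs, ys, f):
    return sum((f(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def permutation_importance(xs, ys, f, feature, rng):
    """Error increase after shuffling one feature column = that feature's importance."""
    baseline = mse(xs, ys, f)
    col = [x[feature] for x in xs]
    rng.shuffle(col)
    permuted = []
    for x, v in zip(xs, col):
        row = list(x)
        row[feature] = v
        permuted.append(row)
    return mse(permuted, ys, f) - baseline

rng = random.Random(0)
xs = [[rng.random(), rng.random()] for _ in range(200)]
ys = [model(x) for x in xs]  # labels generated by the toy model itself

imp0 = permutation_importance(xs, ys, model, 0, rng)
imp1 = permutation_importance(xs, ys, model, 1, rng)
print(imp0 > imp1)  # feature 0 dominates the explanation
```

The same idea scales to real clinical models: whichever input most degrades accuracy when permuted is the one the model leans on, which is exactly the kind of transparency XAI asks for.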
Face Recognition Using LBPH and CNN
Pub Date : 2024-03-15 DOI: 10.2174/0126662558282684240213062932
R. Shukla, A. Tiwari, Ashish Ranjan Mishra
The purpose of this paper was to use Machine Learning (ML) techniques to extract facial features from images. Accurate face detection and recognition have long been a problem in computer vision. According to a recent study, the Local Binary Pattern (LBP) is a superior facial descriptor for face recognition. A person's face may make their identity, feelings, and ideas more obvious. In the modern world, everyone wants to be secure from unauthorized authentication. Face detection and recognition help increase security; however, the most difficult challenge is to accurately recognise faces without creating any false identities. The proposed method uses a Local Binary Pattern Histogram (LBPH) and a Convolutional Neural Network (CNN) to preprocess face images with equalized histograms. LBPH in the proposed technique is used to extract and join the histogram values into a single vector. The technique has been found to reduce training loss and to raise validation accuracy above 96.5%. Prior algorithms have been reported with lower accuracy when compared to LBPH using CNN. This study demonstrates how studying characteristics produces more precise results as the number of epochs increases. By comparing facial similarities, the vector has generated the best result.
{"title":"Face Recognition Using LBPH and CNN","authors":"R. Shukla, A. Tiwari, Ashish Ranjan Mishra","doi":"10.2174/0126662558282684240213062932","DOIUrl":"https://doi.org/10.2174/0126662558282684240213062932","url":null,"abstract":"\u0000\u0000The purpose of this paper was to use Machine Learning (ML) techniques\u0000to extract facial features from images. Accurate face detection and recognition has long been a\u0000problem in computer vision. According to a recent study, Local Binary Pattern (LBP) is a superior\u0000facial descriptor for face recognition. A person's face may make their identity, feelings,\u0000and ideas more obvious. In the modern world, everyone wants to feel secure from unauthorized\u0000authentication. Face detection and recognition help increase security; however, the most difficult\u0000challenge is to accurately recognise faces without creating any false identities.\u0000\u0000\u0000\u0000The proposed method uses a Local Binary Pattern Histogram (LBPH) and Convolution\u0000Neural Network (CNN) to preprocess face images with equalized histograms.\u0000\u0000\u0000\u0000LBPH in the proposed technique is used to extract and join the histogram values into a\u0000single vector. The technique has been found to result in a reduction in training loss and an increase\u0000in validation accuracy of over 96.5%. Prior algorithms have been reported with lower\u0000accuracy when compared to LBPH using CNN.\u0000\u0000\u0000\u0000This study demonstrates how studying characteristics produces more precise results,\u0000as the number of epochs increases. 
By comparing facial similarities, the vector has generated\u0000the best result.\u0000","PeriodicalId":506582,"journal":{"name":"Recent Advances in Computer Science and Communications","volume":"21 52","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140240419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
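The LBPH descriptor this abstract builds on can be sketched in a few lines: each pixel is compared with its eight neighbours to form an 8-bit code, and the codes are pooled into a histogram that serves as the feature vector later fed to a classifier. The tiny 4×4 image below is an illustrative assumption, not data from the paper.

```python
# 8 neighbours, clockwise from top-left, as (row, col) offsets.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(img, r, c):
    """8-bit LBP code at (r, c): each neighbour >= centre sets one bit."""
    centre = img[r][c]
    code = 0
    for bit, (dr, dc) in enumerate(OFFSETS):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist

img = [  # hypothetical grayscale patch
    [10, 20, 30, 40],
    [40, 25, 20, 10],
    [50, 25, 25, 60],
    [90, 80, 70, 60],
]
hist = lbp_histogram(img)
print(sum(hist))  # one code per interior pixel: 2 x 2 = 4
```

In a full LBPH pipeline the image is split into a grid of cells, each cell gets its own histogram, and the histograms are concatenated into the single vector the abstract mentions.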
A Prospective Metaverse Paradigm Based on the Reality-Virtuality Continuum and Digital Twins
Pub Date : 2024-03-08 DOI: 10.2174/0126662558294125240307094426
Abolfazl Zare, Aliakbar Jalali
Decades after the introduction of the concept of virtual reality, and with the expansion and significant advances of technologies and innovations such as 6G, edge computing, the Internet of Things, robotics, artificial intelligence, blockchain, quantum computing, and digital twins, the world is on the cusp of a new revolution. By moving through the three stages of the digital twin, the digital native, and finally the surrealist, the metaverse has created a new vision of the future of human and societal life, so that we are likely to face the next generation of societies (perhaps Society 6) in the not too distant future. However, until then, the reality is that the metaverse is still in its infancy, perhaps where the internet was in 1990. There is still no single definition, few studies have been conducted, and there is no comprehensive and complete paradigm or clear framework; moreover, due to the large financial stakes of the technology giants, most existing studies have focused on profitable areas such as gaming and entertainment. The motivation and purpose of this article are to introduce a prospective metaverse paradigm based on the revised reality-virtuality continuum and to provide a new supporting taxonomy with the three dimensions of interaction, immersion, and extent of world knowledge, in order to develop and strengthen the theoretical foundations of the metaverse and help researchers. Furthermore, there is still no comprehensive and agreed-upon conceptual framework for the metaverse. To this end, by reviewing the research literature, discovering the important components of technological building blocks, especially digital twins, and presenting a new concept called meta-twins, a prospective conceptual framework based on the revised reality-virtuality continuum with a new supporting taxonomy was presented.
{"title":"A Prospective Metaverse Paradigm Based on the Reality-Virtuality Continuum and Digital Twins","authors":"Abolfazl Zare, Aliakbar Jalali","doi":"10.2174/0126662558294125240307094426","DOIUrl":"https://doi.org/10.2174/0126662558294125240307094426","url":null,"abstract":"\u0000\u0000After decades of introducing the concept of virtual reality, the expansion, and significant\u0000advances of technologies and innovations, such as 6g, edge computing, the internet of\u0000things, robotics, artificial intelligence, blockchain, quantum computing, and digital twins, the\u0000world is on the cusp of a new revolution. By moving through the three stages of the digital\u0000twin, digital native, and finally surrealist, the metaverse has created a new vision of the future\u0000of human and societal life so that we are likely to face the next generation of societies (perhaps\u0000society 6) in the not too distant future. However, until then, the reality has been that the\u0000metaverse is still in its infancy, perhaps where the internet was in 1990. There is still no single\u0000definition, few studies have been conducted, there is no comprehensive and complete paradigm\u0000or clear framework, and due to the high financial volume of technology giants, most of these\u0000studies have focused on profitable areas such as gaming and entertainment. The motivation and\u0000purpose of this article are to introduce a prospective metaverse paradigm based on the revised\u0000reality-virtuality continuum and provide a new supporting taxonomy with the three dimensions\u0000of interaction, immersion, and extent of world knowledge to develop and strengthen the theoretical\u0000foundations of the metaverse and help researchers. Furthermore, there is still no comprehensive\u0000and agreed-upon conceptual framework for the metaverse. 
To this end, by reviewing\u0000the research literature, discovering the important components of technological building\u0000blocks, especially digital twins, and presenting a new concept called meta-twins, a prospective\u0000conceptual framework based on the revised reality-virtuality continuum with a new supporting\u0000taxonomy was presented.\u0000","PeriodicalId":506582,"journal":{"name":"Recent Advances in Computer Science and Communications","volume":"55 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140257298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
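The taxonomy's three dimensions (interaction, immersion, and extent of world knowledge) can be pictured as coordinates of an experience inside a unit cube. The sketch below models that idea; the 0-to-1 scaling, the field names, and the sample placements are assumptions for illustration, not the article's actual scales.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetaverseExperience:
    """A point in a hypothetical three-axis taxonomy, each axis scaled to [0, 1]."""
    name: str
    interaction: float      # 0 = passive viewing, 1 = full two-way agency
    immersion: float        # 0 = desktop window, 1 = fully immersive display
    world_knowledge: float  # 0 = purely synthetic world, 1 = fully modelled real world

    def __post_init__(self):
        for dim in (self.interaction, self.immersion, self.world_knowledge):
            if not 0.0 <= dim <= 1.0:
                raise ValueError("each dimension must lie in [0, 1]")

# Illustrative placements along the reality-virtuality continuum.
vr_game = MetaverseExperience("VR game", 0.9, 0.9, 0.1)
digital_twin = MetaverseExperience("factory digital twin", 0.6, 0.5, 1.0)
print(digital_twin.world_knowledge > vr_game.world_knowledge)  # True
```

A digital twin sits at the real-world end of the world-knowledge axis while a VR game sits near the synthetic end, which is exactly the kind of distinction the revised continuum is meant to express.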
A Security Analysis Model for IoT-ecosystem Using Machine Learning-(ML) Approach
Pub Date : 2024-03-01 DOI: 10.2174/0126662558286885240223093414
Pradeep Kumar N.S, M. P. Kantipudi, Praveen N, Suresh S, Dr Rajanikanth Aluvalu, Jayant Jagtap
Attacks on IoT systems are increasing as devices and communication networks become progressively integrated. If attacks go undetected in the IoT for a long time, they affect the availability of services, which can result in data leaks and have a significant impact on the associated costs and quality of services. Therefore, attacks and security vulnerabilities in the IoT ecosystem must be detected to provide robust security and defensive mechanisms for real-time applications. This paper proposes an analytical design of an intelligent attack detection framework using multiple machine learning techniques to provide cost-effective and efficient security analysis services in the IoT ecosystem. The performance validation of the proposed framework is carried out using multiple performance indicators. The simulation outcome exhibits the effectiveness of the proposed system in terms of accuracy and F1-score for the detection of various types of attack scenarios.
{"title":"A Security Analysis Model for IoT-ecosystem Using Machine Learning-\u0000(ML) Approach","authors":"Pradeep Kumar N.S, M. P. Kantipudi, Praveen N, Suresh S, Dr Rajanikanth Aluvalu, Jayant Jagtap","doi":"10.2174/0126662558286885240223093414","DOIUrl":"https://doi.org/10.2174/0126662558286885240223093414","url":null,"abstract":"\u0000\u0000The attacks on IoT systems are increasing as the devices and communication\u0000networks are progressively integrated. If no attacks are found in IoT for a long time, it\u0000will affect the availability of services that can result in data leaks and can create a significant\u0000impact on the associated costs and quality of services. Therefore, the attacks and security vulnerability\u0000in the IoT ecosystem must be detected to provide robust security and defensive\u0000mechanisms for real-time applications.\u0000\u0000\u0000\u0000This paper proposes an analytical design of an intelligent attack detection framework\u0000using multiple machine learning techniques to provide cost-effective and efficient security\u0000analysis services in the IoT ecosystem.\u0000\u0000\u0000\u0000The performance validation of the proposed framework is carried out by multiple performance\u0000indicators.\u0000\u0000\u0000\u0000The simulation outcome exhibits the effectiveness of the proposed system in\u0000terms of accuracy and F1-score for the detection of various types of attacking scenarios.\u0000","PeriodicalId":506582,"journal":{"name":"Recent Advances in Computer Science and Communications","volume":" 7","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-01","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140091041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
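The two metrics this abstract reports, accuracy and F1-score, fall directly out of a binary attack/benign confusion matrix, as sketched below. The label vectors are hypothetical and not results from the paper.

```python
def accuracy_f1(y_true, y_pred):
    """Accuracy and F1 from binary labels (1 = attack, 0 = benign)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, f1

# Hypothetical detector output on ten traffic flows.
y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]
acc, f1 = accuracy_f1(y_true, y_pred)
print(round(acc, 2), round(f1, 2))  # 0.8 0.8
```

F1 matters here because attack traffic is usually the rare class: a detector that labels everything benign can score high accuracy while its F1 on the attack class collapses, so reporting both is the right call.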