
Big Data Research: Latest Articles

Efficient training: Federated learning cost analysis
IF 3.5 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-02-20 | DOI: 10.1016/j.bdr.2025.100510
Rafael Teixeira, Leonardo Almeida, Mário Antunes, Diogo Gomes, Rui L. Aguiar
With the rapid development of 6G, Artificial Intelligence (AI) is expected to play a pivotal role in network management, resource optimization, and intrusion detection. However, deploying AI models in 6G networks faces several challenges, such as the lack of dedicated hardware for AI tasks and the need to protect user privacy. To address these challenges, Federated Learning (FL) emerges as a promising solution for distributed AI training without the need to move data from users' devices. This paper investigates the performance and costs of different FL approaches regarding training time, communication overhead, and energy consumption. The results show that FL can significantly accelerate the training process while reducing the data transferred across the network. However, the effectiveness of FL depends on the specific FL approach and the network conditions.
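As a back-of-the-envelope illustration of the communication-cost trade-off the abstract describes, the sketch below compares the bytes moved by centralized training (shipping raw data once) against a FedAvg-style exchange (shipping model weights every round). All quantities, such as model size, client count, rounds, and per-sample size, are invented for illustration and are not figures from the paper.

```python
# Hypothetical sketch: centralized vs. federated training traffic.
# Every number below is an illustrative assumption.

def centralized_traffic(n_clients: int, samples_per_client: int,
                        bytes_per_sample: int) -> int:
    """Raw data uploaded once from every device to the server."""
    return n_clients * samples_per_client * bytes_per_sample

def federated_traffic(n_clients: int, rounds: int, model_bytes: int) -> int:
    """FedAvg-style exchange: each round every client downloads the global
    model and uploads its update (2 transfers per client per round)."""
    return rounds * n_clients * 2 * model_bytes

if __name__ == "__main__":
    clients, rounds = 100, 20
    model_bytes = 1 * 2**20                 # 1 MiB model
    data = centralized_traffic(clients, 100_000, 10_000)  # 100k samples of 10 KB
    fl = federated_traffic(clients, rounds, model_bytes)
    print(f"centralized upload: {data / 2**30:.2f} GiB")
    print(f"federated traffic:  {fl / 2**30:.2f} GiB")
```

With these toy numbers the federated exchange moves far less data, but flipping the model size or round count can reverse the conclusion, which is exactly the dependence on the FL approach and network conditions that the abstract points to.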
Citations: 0
Improved Tesseract optical character recognition performance on Thai document datasets
IF 3.5 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-02-08 | DOI: 10.1016/j.bdr.2025.100508
Noppol Anakpluek, Watcharakorn Pasanta, Latthawan Chantharasukha, Pattanawong Chokratansombat, Pajaya Kanjanakaew, Thitirat Siriborvornratanakul
This research aims to improve the accuracy and efficiency of Optical Character Recognition (OCR) technology for the Thai language, specifically in the context of Thai government documents. OCR enables the conversion of text from images into a machine-readable format, facilitating document storage and further processing. However, applying OCR to the Thai language presents unique challenges due to its complexity. This study focuses on enhancing the performance of the Tesseract OCR engine, a widely used free OCR technology, by implementing various image preprocessing techniques such as masking, adaptive thresholding, median filtering, Canny edge detection, and morphological operators. A dataset of Thai documents is utilized, and the OCR system's output is evaluated using word error rate (WER) and character error rate (CER) metrics. To improve text extraction accuracy, the research employs the original U-Net architecture [19] for image segmentation. Furthermore, the Tesseract OCR engine is fine-tuned, and image preprocessing is performed to optimize OCR system accuracy. The developed tools automate workflow processes, alleviate constraints on model training, and enable the effective utilization of information from official Thai documents for various purposes.
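To make the described preprocess-then-OCR flow concrete, here is a minimal sketch using OpenCV, pytesseract, and jiwer. The file names, threshold parameters, and kernel size are assumptions, and the Thai traineddata for Tesseract must be installed separately.

```python
# Illustrative preprocessing + Thai OCR pipeline in the spirit of the paper.
import cv2
import pytesseract
from jiwer import wer, cer

img = cv2.imread("thai_document.png", cv2.IMREAD_GRAYSCALE)  # assumed file

# Median filter to suppress scanning noise, then adaptive thresholding.
denoised = cv2.medianBlur(img, 3)
binarized = cv2.adaptiveThreshold(denoised, 255,
                                  cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                  cv2.THRESH_BINARY, 31, 15)

# Morphological opening removes residual specks.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2))
cleaned = cv2.morphologyEx(binarized, cv2.MORPH_OPEN, kernel)

# OCR with the Thai language pack.
text = pytesseract.image_to_string(cleaned, lang="tha")

# Evaluate against a ground-truth transcription (assumed file).
reference = open("ground_truth.txt", encoding="utf-8").read()
print("WER:", wer(reference, text))
print("CER:", cer(reference, text))
```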
Citations: 0
A novel approach for job matching and skill recommendation using transformers and the O*NET database
IF 3.5 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-02-07 | DOI: 10.1016/j.bdr.2025.100509
Rubén Alonso, Danilo Dessí, Antonello Meloni, Diego Reforgiato Recupero
Every day, vast amounts of information about job supply and demand are posted on the web, and this has heavily affected the job market. Online recruiting has thus become efficient for applicants, as it allows them to submit their resumes over the Internet and, as such, to numerous organizations simultaneously. Online systems such as Monster.com, OfferZen, and LinkedIn contain millions of job offers and resumes of potential candidates, leaving companies with the hard task of sifting through an enormous amount of data to select the most suitable applicant. The task of assessing candidates' resumes and providing automatic recommendations on which one best suits a particular position has therefore become essential to speed up the hiring process. Similarly, it is important to help applicants quickly find a job appropriate to their skills and provide recommendations about what they need to master to become eligible for certain jobs. Our approach lies in this context and proposes a new method to identify skills from candidates' resumes and match resumes with job descriptions. We employed the O*NET database entities related to the different skills and abilities required by different jobs; moreover, we leveraged deep learning technologies to compute the semantic similarity between O*NET entities and passages of text extracted from candidates' resumes. The ultimate goal is to identify the most suitable job for a given resume according to the information it contains. We have defined two scenarios: i) given a resume, identify the top O*NET occupations with the highest match with the resume; ii) given a candidate's resume and a set of job descriptions, identify which one of the input jobs is the most suitable for the candidate. The evaluation that has been carried out indicates that the proposed approach outperforms the baselines in both scenarios. Finally, we provide a use case for candidates in which courses can be recommended with the goal of filling certain skill gaps and making them qualified for a certain job.
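The semantic-matching step can be sketched with an off-the-shelf sentence transformer: embed the resume and each occupation description, then rank by cosine similarity. The model name and the abbreviated O*NET snippets below are stand-ins for illustration, not the paper's actual setup.

```python
# Minimal sketch of embedding-based resume-to-occupation matching.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice

# Shortened stand-ins for O*NET occupation descriptions.
onet_occupations = {
    "15-1252.00 Software Developers":
        "Develop, create, and modify computer applications software.",
    "15-2051.00 Data Scientists":
        "Develop methods to transform raw data into actionable insights.",
    "11-2021.00 Marketing Managers":
        "Plan and direct marketing policies and programs.",
}

resume = "Five years building Python back-end services and ML pipelines."

occ_emb = model.encode(list(onet_occupations.values()), convert_to_tensor=True)
res_emb = model.encode(resume, convert_to_tensor=True)

# Cosine similarity between the resume and every occupation, highest first.
scores = util.cos_sim(res_emb, occ_emb)[0]
ranked = sorted(zip(onet_occupations, scores.tolist()), key=lambda p: -p[1])
for occ, score in ranked:
    print(f"{score:.3f}  {occ}")
```

The same ranking loop serves both scenarios from the abstract: rank all O*NET occupations for one resume, or rank a handful of input job descriptions for one candidate.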
Citations: 0
Has machine paraphrasing skills approached humans? Detecting automatically and manually generated paraphrased cases
IF 3.5 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-01-22 | DOI: 10.1016/j.bdr.2025.100507
Iqra Muneer, Aysha Shehzadi, Muhammad Adnan Ashraf, Rao Muhammad Adeel Nawab
In recent years, automatic text rewriting (or paraphrasing) tools have become readily and publicly available. These tools have made text paraphrasing an exceptionally straightforward approach that encourages trouble-free plagiarism and text reuse. In the literature, the majority of efforts have focused on detecting real cases (manual/human paraphrasing) of paraphrasing, mainly in the domain of journalism. However, the problem of paraphrase detection has not been thoroughly explored for artificial cases (machine paraphrasing), mainly due to a lack of standard resources for its evaluation. To fill this gap, this study proposes three benchmark corpora for artificial cases of paraphrasing at the sentence level, and one real corpus containing examples from daily life activities. Three popular and widely used online automatic text rewriting tools have been used, i.e., paraphrasing-tools, articlerewritetool and rewritertools, to develop the artificial case corpora. Further, we used two real-case corpora, including the Microsoft Paraphrase Corpus (MSRP) (from the domain of journalism) and a proposed real corpus which is a combination of carefully extracted Quora question pairs and MSRP (Q-MSRP). Both real-case and artificial-case paraphrases were evaluated using classical machine learning, transfer learning, large language models and a proposed model, to investigate which of the two types of paraphrasing is more difficult to detect. The results show that our proposed model outperforms all the other approaches for both artificial and real case paraphrase detection. A thorough analysis of the results suggests that, by far, manual paraphrasing is still harder to detect, but certain machine-paraphrased texts are equally difficult to detect. All proposed corpora are freely available to promote research on artificial case paraphrase detection.
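A minimal classical-ML baseline of the kind such studies compare against can be sketched as TF-IDF character n-grams over each sentence pair fed to a logistic regression. The toy pairs below are invented; a real evaluation would train on MSRP or Q-MSRP.

```python
# Toy paraphrase-detection baseline: TF-IDF pair features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from scipy.sparse import hstack

# (sentence_1, sentence_2, is_paraphrase) -- invented examples.
pairs = [("The cat sat on the mat.", "A cat was sitting on the mat.", 1),
         ("He bought a new car.", "A brand-new car was purchased by him.", 1),
         ("Stocks fell sharply today.", "She enjoys hiking on weekends.", 0),
         ("Open the window, please.", "The meeting starts at noon.", 0)]

s1, s2, y = zip(*pairs)
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)).fit(s1 + s2)

# Represent a pair as the concatenation of both sentences' TF-IDF vectors.
X = hstack([vec.transform(s1), vec.transform(s2)])
clf = LogisticRegression(max_iter=1000).fit(X, y)

test = hstack([vec.transform(["The dog barked loudly."]),
               vec.transform(["A dog was barking loudly."])])
print("paraphrase probability:", clf.predict_proba(test)[0, 1])
```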
Citations: 0
Multi-granularity enhanced graph convolutional network for aspect sentiment triplet extraction
IF 3.5 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-01-17 | DOI: 10.1016/j.bdr.2025.100506
Mingwei Tang, Kun Yang, Linping Tao, Mingfeng Zhao, Wei Zhou
Aspect Sentiment Triplet Extraction (ASTE) is an emerging sentiment analysis task that extracts aspect terms, their sentiment polarity, and the opinion terms expressing that polarity. Several models have been presented to analyze sentence sentiment more accurately. Nonetheless, previous models have had problems such as inconsistent sentiment predictions for one-to-many, many-to-one, and sequence annotation. In addition, part-of-speech and contextual semantic information are ignored, resulting in an inability to identify complete multi-word aspect terms and opinion terms. To address these problems, we propose a Multi-granularity Enhanced Graph Convolutional Network (MGEGCN) to solve the problem of inaccurate multi-word term recognition. First, we propose a dual-channel enhanced graph convolutional network, which simultaneously analyzes syntactic structure and part-of-speech information and uses their combined effect to enhance the deep semantic information of aspect terms and opinion terms. Second, we design a multi-scale attention that combines self-attention with depthwise separable convolution to enhance attention to aspect terms and opinion terms. In addition, a convolutional decoding strategy is used in the decoding stage to extract triplets by directly detecting and classifying the relational regions in the table. In the experimental part, we conduct analysis on two public datasets (ASTE-DATA-v1 and ASTE-DATA-v2) to show that the model improves performance on ASTE tasks. On the four subsets (14res, 14lap, 15res, and 16res), the F1 scores of the MGEGCN method are 75.65%, 61.62%, 67.62%, and 74.12% on ASTE-DATA-v1 and 74.69%, 62.10%, 68.18%, and 74.00% on ASTE-DATA-v2, respectively.
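The dual-channel idea can be sketched as two graph convolutions sharing the same token representations: one propagating over the dependency graph and one over a part-of-speech-based graph, fused by summation. The fusion rule, dimensions, and random stand-in adjacency matrices below are assumptions, not the published architecture.

```python
# Simplified dual-channel graph convolution in the spirit of MGEGCN.
import torch
import torch.nn as nn

class DualChannelGCNLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.w_syn = nn.Linear(dim, dim)   # syntactic-structure channel
        self.w_pos = nn.Linear(dim, dim)   # part-of-speech channel
        self.act = nn.ReLU()

    def forward(self, x, a_syn, a_pos):
        # x: (n_tokens, dim); a_*: row-normalized (n_tokens, n_tokens) adjacency
        h_syn = self.act(a_syn @ self.w_syn(x))
        h_pos = self.act(a_pos @ self.w_pos(x))
        return h_syn + h_pos               # assumed fusion: channel sum

n_tokens, dim = 6, 32
x = torch.randn(n_tokens, dim)
a = torch.softmax(torch.randn(n_tokens, n_tokens), dim=-1)  # stand-in adjacency
layer = DualChannelGCNLayer(dim)
print(layer(x, a, a).shape)                # torch.Size([6, 32])
```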
Citations: 0
Positional-attention based bidirectional deep stacked AutoEncoder for aspect based sentimental analysis
IF 3.5 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-16 | DOI: 10.1016/j.bdr.2024.100505
S. Anjali Devi, M. Sitha Ram, Pulugu Dileep, Sasibhushana Rao Pappu, T. Subha Mastan Rao, Mula Malyadri
With the rapid growth of Internet technology and social networks, the amount of text-based information generated on the web has increased. To ease Natural Language Processing (NLP) tasks, analyzing the sentiment behind a given input text is highly important. To effectively analyze the polarities of sentiments (positive, negative and neutral), categorizing the aspects in the text is an essential task. Several existing studies have attempted to accurately classify aspects based on sentiments in text inputs. However, the existing methods attained limited performance because of reduced aspect coverage, inefficiency in handling ambiguous language, inappropriate feature extraction, lack of contextual understanding and overfitting issues. Thus, the proposed study develops an effective word-embedding scheme with a novel hybrid deep learning technique for performing aspect-based sentiment analysis on social media text. Initially, the collected raw input text data are pre-processed to remove undesirable data through tokenization, stemming, lemmatization, duplicate removal, stop-word removal, empty-set removal and empty-row removal. The required information is extracted from the pre-processed text using three different word-level embedding methods: Scored-Lexicon based Word2Vec, GloVe modelling and Extended Bidirectional Encoder Representation from Transformers (E-BERT). After extracting sufficient features, the aspects are analyzed, and the sentiment polarities are classified through a novel Positional-Attention-based Bidirectional Deep Stacked AutoEncoder (PA_BiDSAE) model. In the proposed classification, the BiLSTM network is hybridized with a deep stacked autoencoder (DSAE) model to categorize sentiment. The experimental analysis is done using Python software, and the proposed model is evaluated on three publicly available datasets: SemEval Challenge 2014 (Restaurant), SemEval Challenge 2014 (Laptop) and SemEval Challenge 2015 (Restaurant). The performance analysis shows that the proposed hybrid deep learning model obtains improved classification performance in accuracy, precision, recall, specificity, F1 score and kappa measure.
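A rough PyTorch sketch of the hybrid described above: a BiLSTM encoder whose pooled output passes through a stacked autoencoder, with a classification head on the bottleneck and a reconstruction loss on the decoder. Layer sizes, mean pooling, the three-class head, and the loss weighting are assumptions; the positional-attention mechanism is omitted.

```python
# Sketch of a BiLSTM + deep stacked autoencoder hybrid classifier.
import torch
import torch.nn as nn

class BiLSTMStackedAE(nn.Module):
    def __init__(self, emb_dim=100, hidden=64, bottleneck=16, n_classes=3):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                              bidirectional=True)
        feat = 2 * hidden
        self.encoder = nn.Sequential(nn.Linear(feat, 32), nn.ReLU(),
                                     nn.Linear(32, bottleneck), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(bottleneck, 32), nn.ReLU(),
                                     nn.Linear(32, feat))
        self.head = nn.Linear(bottleneck, n_classes)

    def forward(self, x):                  # x: (batch, seq_len, emb_dim)
        out, _ = self.bilstm(x)
        pooled = out.mean(dim=1)           # average over time steps
        z = self.encoder(pooled)
        return self.head(z), self.decoder(z), pooled

model = BiLSTMStackedAE()
logits, recon, target = model(torch.randn(8, 20, 100))
# Joint objective: classification + autoencoder reconstruction.
loss = nn.functional.cross_entropy(logits, torch.randint(0, 3, (8,))) \
       + nn.functional.mse_loss(recon, target.detach())
print(round(loss.item(), 3))
```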
Citations: 0
Principal component analysis of multivariate spatial functional data
IF 3.5 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-16 | DOI: 10.1016/j.bdr.2024.100504
Idris Si-ahmed, Leila Hamdad, Christelle Judith Agonkoui, Yoba Kande, Sophie Dabo-Niang
This paper is devoted to the study of dimension reduction techniques for multivariate spatially indexed functional data defined on different domains. We present a method called Spatial Multivariate Functional Principal Component Analysis (SMFPCA), which performs principal component analysis for multivariate spatial functional data. In contrast to the Multivariate Karhunen-Loève approach for independent data, SMFPCA is notably adept at effectively capturing spatial dependencies among multiple functions. SMFPCA applies spectral functional component analysis to multivariate functional spatial data, focusing on data points arranged on a regular grid. The methodological framework and algorithm of SMFPCA have been developed to tackle the challenges arising from the lack of appropriate methods for managing this type of data. The performance of the proposed method has been verified through finite-sample properties using simulated datasets and a sea-surface temperature dataset. Additionally, we conducted comparative studies of SMFPCA against some existing methods, providing valuable insights into the properties of multivariate spatial functional data within a finite sample.
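A simplified functional-PCA sketch, ignoring the spatial-dependence weighting that distinguishes SMFPCA: fit each location's curve with B-spline basis coefficients, then run ordinary PCA on the stacked coefficients. The grid size, basis order, and simulated curves are assumptions, and BSpline.design_matrix requires SciPy 1.8 or later.

```python
# Basis-expansion + PCA sketch for functional data on a spatial grid.
import numpy as np
from scipy.interpolate import BSpline
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 50)                    # common evaluation grid
n_sites, n_basis, degree = 100, 12, 3

# Clamped knot vector giving n_basis cubic B-spline basis functions.
knots = np.concatenate([np.zeros(degree),
                        np.linspace(0, 1, n_basis - degree + 1),
                        np.ones(degree)])
design = BSpline.design_matrix(grid, knots, degree).toarray()  # (50, 12)

# One noisy curve per spatial site (spatial correlation omitted here).
curves = np.sin(4 * np.pi * grid) + 0.3 * rng.standard_normal((n_sites, grid.size))

# Least-squares basis coefficients for every site, then PCA on them.
coefs, *_ = np.linalg.lstsq(design, curves.T, rcond=None)      # (12, n_sites)
pca = PCA(n_components=3).fit(coefs.T)
print("explained variance ratio:", pca.explained_variance_ratio_.round(3))
```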
Citations: 0
Incomplete data classification via positive approximation based rough subspaces ensemble
IF 3.5 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-14 | DOI: 10.1016/j.bdr.2024.100496
Yuanting Yan, Meili Yang, Zhong Zheng, Hao Ge, Yiwen Zhang, Yanping Zhang
Classifying incomplete data using ensemble techniques is a prevalent method for addressing missing values, where multiple classifiers are trained on diverse subsets of features. However, current ensemble-based methods overlook the redundancy within feature subsets, presenting challenges for training robust prediction models, because redundant features can hinder the learning of the underlying rules in the data. In this paper, we propose a Reduct-Missing Pattern Fusion (RMPF) method to address this limitation. It leverages both the advantages of rough set theory and the effectiveness of missing patterns in classifying incomplete data. RMPF employs a heuristic algorithm to generate a set of positive-approximation-based attribute reducts. Subsequently, it integrates the missing patterns with these reducts through a fusion strategy to minimize data redundancy. Finally, the optimized subsets are utilized to train a group of base classifiers, and a selective prediction procedure is applied to produce the ensembled prediction results. Experimental results show that our method is superior to the compared state-of-the-art methods in both performance and robustness. In particular, our method shows a significant advantage in scenarios with high missing rates.
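The positive-approximation idea behind the reduct search can be illustrated with a toy greedy heuristic: attributes are added while they enlarge the positive region, i.e., the set of objects whose equivalence class is pure in the decision attribute. The missing-pattern fusion and ensemble stages of RMPF are omitted, and the four-row dataset is invented.

```python
# Toy greedy attribute-reduct search driven by the rough-set positive region.
from collections import defaultdict

def positive_region(rows, attrs, decision):
    """Objects whose equivalence class under `attrs` has a single decision."""
    blocks = defaultdict(set)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].add(i)
    pos = set()
    for block in blocks.values():
        if len({rows[i][decision] for i in block}) == 1:  # decision-pure
            pos |= block
    return pos

def greedy_reduct(rows, attrs, decision):
    chosen, covered = [], set()
    while True:
        gains = [(len(positive_region(rows, chosen + [a], decision)), a)
                 for a in attrs if a not in chosen]
        if not gains:
            return chosen
        top, best_attr = max(gains)
        if top <= len(covered):            # no attribute adds coverage: stop
            return chosen
        chosen.append(best_attr)
        covered = positive_region(rows, chosen, decision)

data = [{"outlook": "sun",  "wind": "weak",   "play": "yes"},
        {"outlook": "sun",  "wind": "strong", "play": "no"},
        {"outlook": "rain", "wind": "weak",   "play": "yes"},
        {"outlook": "rain", "wind": "strong", "play": "no"}]
print(greedy_reduct(data, ["outlook", "wind"], "play"))   # -> ['wind']
```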
Citations: 0
Joint embedding in hierarchical distance and semantic representation learning for link prediction
IF 3.5 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-13 | DOI: 10.1016/j.bdr.2024.100495
Jin Liu, Jianye Chen, Chongfeng Fan, Fengyu Zhou
The link prediction task aims to predict missing entities or relations in a knowledge graph and is essential for downstream applications. Existing well-known models deal with this task mainly by representing knowledge graph triplets in a distance space or a semantic space. However, they cannot fully capture the information of head and tail entities, nor make good use of hierarchical information. Thus, in this paper, we propose a novel knowledge graph embedding model for the link prediction task, namely HIE, which models each triplet (h, r, t) in a distance measurement space and a semantic measurement space simultaneously. Moreover, HIE is introduced into a hierarchy-aware space to leverage the rich hierarchical information of entities and relations for better representation learning. Specifically, we apply a distance transformation operation to the head entity in the distance space to obtain the tail entity, instead of translation-based or rotation-based approaches. Experimental results of HIE on four real-world datasets show that HIE outperforms several existing state-of-the-art knowledge graph embedding methods on the link prediction task and handles complex relations accurately.
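The distance-transformation scoring can be sketched as a relation-specific map applied to the head embedding, replacing the h + r ≈ t translation of TransE-style models. The elementwise scale-and-shift form and the dimensions below are assumptions; HIE's semantic and hierarchy-aware components are not reproduced.

```python
# Minimal triple-scoring sketch: transform the head, measure distance to tail.
import torch
import torch.nn as nn

class DistanceTransformScore(nn.Module):
    def __init__(self, n_ent: int, n_rel: int, dim: int = 50):
        super().__init__()
        self.ent = nn.Embedding(n_ent, dim)
        self.rel_scale = nn.Embedding(n_rel, dim)  # elementwise scaling
        self.rel_shift = nn.Embedding(n_rel, dim)  # elementwise shift

    def forward(self, h, r, t):
        # Relation-specific affine transform of the head embedding.
        transformed = self.ent(h) * self.rel_scale(r) + self.rel_shift(r)
        # Negative L2 distance to the tail: higher score = more plausible.
        return -torch.norm(transformed - self.ent(t), p=2, dim=-1)

model = DistanceTransformScore(n_ent=1000, n_rel=20)
h = torch.tensor([0, 1]); r = torch.tensor([3, 3]); t = torch.tensor([5, 9])
print(model(h, r, t))   # scores for two candidate triples
```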
Citations: 0
Deep semantics-preserving cross-modal hashing
IF 3.5 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-07 | DOI: 10.1016/j.bdr.2024.100494
Zhihui Lai, Xiaomei Fang, Heng Kong
Cross-modal hashing has received widespread attention in recent years due to its outstanding performance in cross-modal data retrieval. Cross-modal hashing can be decomposed into two steps: feature learning and binarization. However, most existing cross-modal hashing methods do not take the supervisory information of the data into consideration during binary quantization, and thus often fail to adequately preserve semantic information. To solve these problems, this paper proposes a novel deep cross-modal hashing method called deep semantics-preserving cross-modal hashing (DSCMH), which makes full use of intra- and inter-modal semantic information to improve the model's performance. Moreover, by designing a label network for semantic alignment during the binarization process, DSCMH's performance can be further improved. To verify the performance of the proposed method, extensive experiments were conducted on four big datasets. The results show that the proposed method is better than most of the existing cross-modal hashing methods. In addition, an ablation experiment shows that the proposed regularization terms all have positive effects on the model's performance in cross-modal retrieval. The code of this paper can be downloaded from http://www.scholat.com/laizhihui.
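A skeleton of a two-branch cross-modal hashing model in this vein: image and text encoders map into a shared code space through a tanh relaxation, trained so that codes of semantically matching pairs agree, with sign() yielding the final binary codes. The encoder sizes, the pairwise loss, and the identity similarity matrix are assumptions, and DSCMH's label network is omitted.

```python
# Two-branch cross-modal hashing skeleton with tanh-relaxed binary codes.
import torch
import torch.nn as nn

class CrossModalHasher(nn.Module):
    def __init__(self, img_dim=4096, txt_dim=1386, code_bits=32):
        super().__init__()
        self.img_net = nn.Sequential(nn.Linear(img_dim, 512), nn.ReLU(),
                                     nn.Linear(512, code_bits))
        self.txt_net = nn.Sequential(nn.Linear(txt_dim, 512), nn.ReLU(),
                                     nn.Linear(512, code_bits))

    def forward(self, img, txt):
        # tanh keeps codes in (-1, 1) so they can be binarized with sign().
        return torch.tanh(self.img_net(img)), torch.tanh(self.txt_net(txt))

model = CrossModalHasher()
img, txt = torch.randn(8, 4096), torch.randn(8, 1386)
b_img, b_txt = model(img, txt)

sim = torch.eye(8)                         # 1 where image/text pairs match
inner = b_img @ b_txt.t() / 32             # normalized code agreement
loss = ((inner - (2 * sim - 1)) ** 2).mean()   # pull matched codes together
loss.backward()
print(b_img.detach().sign()[0])            # final binary code for retrieval
```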
Citations: 0