
Latest publications in Frontiers in Big Data

ULBERT: a domain-adapted BERT model for bilingual information retrieval from Pakistan's constitution.
IF 2.4 Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-09-22 eCollection Date: 2025-01-01 DOI: 10.3389/fdata.2025.1448785
Qaiser Abbas, Waqas Nawaz, Sadia Niazi, Muhammad Awais

Introduction: Navigating legal texts like a national constitution is notoriously difficult due to specialized jargon and complex internal references. For the Constitution of Pakistan, no automated, user-friendly search tool existed to address this challenge. This paper introduces ULBERT, a novel AI-powered information retrieval framework designed to make the constitution accessible to all users, from legal experts to ordinary citizens, in both English and Urdu.

Methods: The system is built around a custom AI model that moves beyond keyword matching to understand the semantic meaning of a user's query. It processes questions in English or Urdu and compares them to the constitutional text, identifying the most relevant passages based on contextual and semantic similarity.
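The retrieval step described above can be illustrated with a minimal sketch: passages and queries are mapped to embedding vectors (in ULBERT, by a domain-adapted BERT encoder; here, toy vectors stand in), and passages are ranked by cosine similarity. All names and numbers below are illustrative, not the authors' implementation.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec, passage_vecs, top_k=3):
    # Rank passages by semantic similarity to the query; return indices.
    scores = [cosine_sim(query_vec, p) for p in passage_vecs]
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return ranked[:top_k]

# Toy 4-dimensional "embeddings" standing in for BERT sentence vectors.
passages = [np.array([1.0, 0.0, 0.0, 0.0]),
            np.array([0.9, 0.1, 0.0, 0.0]),
            np.array([0.0, 0.0, 1.0, 0.0])]
query = np.array([1.0, 0.05, 0.0, 0.0])
print(retrieve(query, passages, top_k=2))  # indices of the two closest passages
```

In a real system the same ranking loop would run over precomputed encoder embeddings of every constitutional article, for queries in either language.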

Results: In performance testing, the ULBERT framework proved highly effective. It successfully retrieved the correct constitutional information with an accuracy of 86% for English queries and 73% for Urdu queries.

Discussion: These results demonstrate a significant breakthrough in enhancing the accessibility of foundational legal documents through artificial intelligence. The framework provides an effective and intuitive tool for legal inquiry, empowering a broader audience to understand the Constitution of Pakistan.

Citations: 0
Enhancing intelligence source performance management through two-stage stochastic programming and machine learning techniques.
IF 2.4 Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-09-22 eCollection Date: 2025-01-01 DOI: 10.3389/fdata.2025.1640539
Lucas Wafula Wekesa, Stephen Korir

Introduction: The effectiveness of intelligence operations depends heavily on the reliability and performance of human intelligence (HUMINT) sources. Yet, source behavior is often unpredictable, deceptive or shaped by operational context, complicating resource allocation and tasking decisions.

Methods: This study developed a hybrid framework combining Machine Learning (ML) techniques and Two-Stage Stochastic Programming (TSSP) for HUMINT source performance management under uncertainty. A synthetic dataset reflecting HUMINT operational patterns was generated and used to train classification and regression models. The extreme Gradient Boosting (XGBoost) and Support Vector Machines (SVM) were applied for behavioral classification and prediction of reliability and deception scores. The predictive outputs were then transformed into scenario probabilities and integrated into the TSSP model to optimize task allocation under varying behavioral uncertainties.
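The two-stage structure described above can be sketched in miniature: a first-stage tasking decision is chosen to minimize fixed cost plus the probability-weighted recourse cost over the ML-derived behavioral scenarios. The sources, scenarios, probabilities, and costs below are invented toy values, not data from the study.

```python
from itertools import product

# Hypothetical toy data: two sources, three behavioral scenarios with
# ML-derived probabilities (illustrative numbers only).
scenarios = {"reliable": 0.6, "uncertain": 0.3, "deceptive": 0.1}
# Second-stage (recourse) cost of relying on source j under scenario s.
recourse = {
    "reliable":  [2.0, 3.0],
    "uncertain": [4.0, 3.5],
    "deceptive": [9.0, 5.0],
}
fixed_cost = [1.0, 1.5]  # first-stage cost of preparing each source

def expected_cost(assign):
    # assign[j] = 1 if source j is tasked in the first stage.
    first = sum(c for c, a in zip(fixed_cost, assign) if a)
    # In each scenario, use the cheapest tasked source (simple recourse rule).
    second = sum(p * min(recourse[s][j] for j, a in enumerate(assign) if a)
                 for s, p in scenarios.items())
    return first + second

# Enumerate first-stage decisions (at least one source tasked) and
# pick the one minimizing expected total cost.
best = min((a for a in product([0, 1], repeat=2) if any(a)), key=expected_cost)
print(best, round(expected_cost(best), 2))
```

A production TSSP model would replace the brute-force enumeration with a mathematical-programming solver and a richer recourse function, but the objective has the same shape: first-stage cost plus expected second-stage cost over scenarios.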

Results: The classifiers achieved 98% overall accuracy, with XGBoost exhibiting higher precision and SVM demonstrating superior recall for rare but operationally significant categories. The regression models achieved R-squared scores of 93% for reliability and 81% for deception. These predictive outputs were transformed into scenario probabilities for integration into the TSSP model, optimizing task allocation under varying behavioral risks. When compared to a deterministic optimization baseline, the hybrid framework delivered a 16.8% reduction in expected tasking costs and a 19.3% improvement in mission success rates.

Discussion and conclusion: The findings demonstrated that scenario-based probabilistic planning offers significant advantages over static heuristics in managing uncertainty in HUMINT operations. While the simulation results are promising, validation through field data is required before operational deployment.

Citations: 0
Multistakeholder fairness in tourism: what can algorithms learn from tourism management?
IF 2.4 Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-09-18 eCollection Date: 2025-01-01 DOI: 10.3389/fdata.2025.1632766
Peter Müllner, Anna Schreuer, Simone Kopeinik, Bernhard Wieser, Dominik Kowald

Algorithmic decision-support systems, i.e., recommender systems, are popular digital tools that help tourists decide which places and attractions to explore. However, algorithms often unintentionally direct tourist streams in a way that negatively affects the environment, local communities, or other stakeholders. This issue can be partly attributed to the computer science community's limited understanding of the complex relationships and trade-offs among stakeholders in the real world. In this work, we draw on the practical findings and methods from tourism management to inform research on multistakeholder fairness in algorithmic decision-support. Leveraging a semi-systematic literature review, we synthesize literature from tourism management as well as literature from computer science. Our findings suggest that tourism management actively tries to identify the specific needs of stakeholders and utilizes qualitative, inclusive and participatory methods to study fairness from a normative and holistic research perspective. In contrast, computer science lacks sufficient understanding of stakeholder needs and primarily considers fairness through descriptive factors, such as measurable discrimination, while heavily relying on a few mathematically formalized fairness criteria that fail to capture the multidimensional nature of fairness in tourism. With the results of this work, we aim to illustrate the shortcomings of purely algorithmic research and stress the potential of, and particular need for, future interdisciplinary collaboration. We believe such a collaboration is a fundamental and necessary step to enhance algorithmic decision-support systems toward understanding and supporting true multistakeholder fairness in tourism.

Citations: 0
FAST-framework for AI-based surgical transformation.
IF 2.4 Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-09-12 eCollection Date: 2025-01-01 DOI: 10.3389/fdata.2025.1655260
Harmehr Sekhon, Farid Al Zoubi, Paul E Beaulé, Pascal Fallavollita

Background: The use of machine learning (ML) in surgery to date has largely focused on prediction of surgical variables, which has not been found to significantly improve operating room (OR) efficiencies and surgical success rates (SSR). Given long surgery wait times, limited health care resources, and increasing population needs, innovative ML models are needed. Thus, the Framework for AI-based Surgical Transformation (FAST) was created to make real-time recommendations to improve OR efficiency.

Methods: The FAST model was developed and evaluated using a dataset of n=4796 orthopedic cases and utilizes surgery- and team-specific variables (e.g., team composition, OR turnover time, procedure duration), along with regular positive deviance (PD) seminars with stakeholders to support adherence and uptake. FAST was created using six ML algorithms, including decision trees and neural networks. FAST was implemented in orthopedic surgeries at a hospital in Canada's capital (Ottawa).

Results: FAST was found to be feasible and implementable in the hospital's orthopedic OR, with good team engagement attributable to the PD seminars. FAST led to an SSR of 93% over 23 weeks (57 arthroplasty surgery days), compared to 39% at baseline. Key variables impacting SSR included starting the first surgery on time, turnover time, and team composition.

Conclusions: FAST is a novel ML framework that can provide real time feedback for improving OR efficiency and SSR. Stakeholder integration is key in its success in uptake and adherence. This unique framework can be implemented in different hospitals and for diverse surgeries, offering a novel and innovative application of ML for improving OR efficiency without additional resources.

Citations: 0
Secure aggregation of sufficiently many private inputs.
IF 2.4 Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-09-10 eCollection Date: 2025-01-01 DOI: 10.3389/fdata.2025.1638307
Thijs Veugen, Gabriele Spini, Frank Muller

Secure aggregation of distributed inputs is a well-studied problem. In this study, anonymity of inputs is achieved by assuring a minimal quota before publishing the outcome. We design and implement an efficient cryptographic protocol that mitigates the most important security risks, and we show its application in the cyber threat intelligence (CTI) domain. Our approach allows for generic aggregation and quota functions. With 20 inputs from different parties, we can perform three secure and anonymous aggregations per second, and in a CTI community of 100 partners, 10,000 aggregations could be performed during one night.
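One standard way to realize quota-gated secure aggregation is additive secret sharing: each party splits its input into random shares that only reveal the value when all of them are summed, and the aggregate is released only once the minimum quota of contributors is met. The sketch below is a generic illustration of that idea under assumed parameters (modulus, quota), not the paper's protocol.

```python
import secrets

MOD = 2**61 - 1  # prime modulus for additive shares
QUOTA = 3        # minimum number of inputs before the sum may be revealed

def share(value, n):
    # Split `value` into n additive shares that sum to value mod MOD.
    parts = [secrets.randbelow(MOD) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % MOD)
    return parts

def aggregate(all_shares):
    # Each party sends one share to each aggregator; aggregators sum
    # their column, and the column totals recombine into the plain sum.
    if len(all_shares) < QUOTA:
        raise ValueError("quota not met; refusing to reveal an aggregate")
    n = len(all_shares[0])
    column_totals = [sum(row[i] for row in all_shares) % MOD for i in range(n)]
    return sum(column_totals) % MOD

inputs = [12, 7, 30]                      # private inputs from three parties
shared = [share(v, 3) for v in inputs]    # 3 shares per party
print(aggregate(shared))                  # the sum 49; no single share reveals an input
```

The quota check is what provides the anonymity guarantee discussed above: with too few contributors, an aggregate could be trivially de-anonymized, so it is never released.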

Citations: 0
Toward more realistic career path prediction: evaluation and methods.
IF 2.4 Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-08-25 eCollection Date: 2025-01-01 DOI: 10.3389/fdata.2025.1564521
Elena Senger, Yuri Campbell, Rob van der Goot, Barbara Plank

Predicting career trajectories is a complex yet impactful task, offering significant benefits for personalized career counseling, recruitment optimization, and workforce planning. However, effective career path prediction (CPP) modeling faces challenges including highly variable career trajectories, free-text resume data, and limited publicly available benchmark datasets. In this study, we present a comprehensive comparative evaluation of CPP models, namely linear projection, multilayer perceptron (MLP), LSTM, and large language models (LLMs), across multiple input settings and two recently introduced public datasets. Our contributions are threefold: (1) we propose novel model variants, including an MLP extension and a standardized LLM approach, (2) we systematically evaluate model performance across input types (titles only vs. title+description, standardized vs. free-text), and (3) we investigate the role of synthetic data and fine-tuning strategies in addressing data scarcity and improving model generalization. Additionally, we provide a detailed qualitative analysis of prediction behaviors across industries, career lengths, and transitions. Our findings establish new baselines, reveal the trade-offs of different modeling strategies, and offer practical insights for deploying CPP systems in real-world settings.

Citations: 0
Automated road surface classification in OpenStreetMap using MaskCNN and aerial imagery.
IF 2.4 Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-08-13 eCollection Date: 2025-01-01 DOI: 10.3389/fdata.2025.1657320
R Parvathi, V Pattabiraman, Nancy Saxena, Aakarsh Mishra, Utkarsh Mishra, Ansh Pandey

Introduction: OpenStreetMap (OSM) road surface data is critical for navigation, infrastructure monitoring, and urban planning but is often incomplete or inconsistent. This study addresses the need for automated validation and classification of road surfaces by leveraging high-resolution aerial imagery and deep learning techniques.

Methods: We propose a MaskCNN-based deep learning model enhanced with attention mechanisms and a hierarchical loss function to classify road surfaces into four types: asphalt, concrete, gravel, and dirt. The model uses NAIP (National Agriculture Imagery Program) aerial imagery aligned with OSM labels. Preprocessing includes georeferencing, data augmentation, label cleaning, and class balancing. The architecture comprises a ResNet-50 encoder with squeeze-and-excitation blocks and a U-Net-style decoder with spatial attention. Evaluation metrics include accuracy, mIoU, precision, recall, and F1-score.
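The mIoU metric used for evaluation can be stated concretely: per-class intersection over union of the predicted and reference label maps, averaged over the classes present. The sketch below uses tiny hand-made label maps as stand-ins for full segmentation masks; it illustrates the metric, not the authors' evaluation code.

```python
import numpy as np

CLASSES = ["asphalt", "concrete", "gravel", "dirt"]

def mean_iou(pred, target, num_classes=4):
    # Per-class intersection-over-union, averaged over classes that occur.
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 4x4 label maps standing in for full road-surface masks.
target = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 3, 3],
                   [2, 2, 3, 3]])
pred = target.copy()
pred[0, 0] = 1  # one asphalt pixel misclassified as concrete
print(mean_iou(pred, target))
```

With one of sixteen pixels wrong, the asphalt and concrete IoUs drop below 1 while gravel and dirt stay perfect, which is why mIoU is more sensitive to per-class errors than overall pixel accuracy.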

Results: The proposed model achieved an overall accuracy of 92.3% and a mean Intersection over Union (mIoU) of 83.7%, outperforming baseline models such as SVM (81.2% accuracy), Random Forest (83.7%), and standard U-Net (89.6%). Class-wise performance showed high precision and recall even for challenging surface types like gravel and dirt. Comparative evaluations against state-of-the-art models (COANet, SA-UNet, MMFFNet) also confirmed superior performance.

Discussion: The results demonstrate that combining NAIP imagery with attention-guided CNN architectures and hierarchical loss functions significantly improves road surface classification. The model is robust across varied terrains and visual conditions and shows potential for real-world applications such as OSM data enhancement, infrastructure analysis, and autonomous navigation. Limitations include label noise in OSM and class imbalance, which can be addressed through future work involving semi-supervised learning and multimodal data integration.

Citations: 0
Editorial: Interdisciplinary approaches to complex systems: highlights from FRCCS 2023/24.
IF 2.4 Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-08-12 eCollection Date: 2025-01-01 DOI: 10.3389/fdata.2025.1666305
Roberto Interdonato, Hocine Cherifi
{"title":"Editorial: Interdisciplinary approaches to complex systems: highlights from FRCCS 2023/24.","authors":"Roberto Interdonato, Hocine Cherifi","doi":"10.3389/fdata.2025.1666305","DOIUrl":"10.3389/fdata.2025.1666305","url":null,"abstract":"","PeriodicalId":52859,"journal":{"name":"Frontiers in Big Data","volume":"8 ","pages":"1666305"},"PeriodicalIF":2.4,"publicationDate":"2025-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12382158/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144978069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Artificial intelligence for surgical outcome prediction in glaucoma: a systematic review.
IF 2.4 Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-08-08 eCollection Date: 2025-01-01 DOI: 10.3389/fdata.2025.1605018
Zeena Kailani, Lauren Kim, Joshua Bierbrier, Michael Balas, David J Mathew

Introduction: Glaucoma is a leading cause of irreversible blindness, and its rising global prevalence has led to a significant increase in glaucoma surgeries. However, predicting postoperative outcomes remains challenging due to the complex interplay of patient factors, surgical techniques, and postoperative care. Artificial intelligence (AI) has emerged as a promising tool for enhancing predictive accuracy in clinical decision-making.

Methods: This systematic review was conducted to evaluate the current evidence on the use of AI to predict surgical outcomes in glaucoma patients. A comprehensive search of Medline, Embase, Web of Science, and Scopus was performed. Studies were included if they applied AI models to glaucoma surgery outcome prediction.

Results: Six studies met inclusion criteria, collectively analyzing 4,630 surgeries. A variety of algorithms were applied, including random forests, support vector machines, and neural networks. Overall, AI models consistently outperformed traditional statistical approaches, with the best-performing model achieving an accuracy of 87.5%. Key predictors of outcomes included demographic factors (e.g., age), systemic health indicators (e.g., smoking status and body mass index), and ophthalmic parameters (e.g., baseline intraocular pressure, central corneal thickness, mitomycin C use).

Discussion: While AI models demonstrated superior performance to traditional statistical approaches, the lack of external validation and standardized surgical success definitions limit their clinical applicability. This review highlights both the promise and the current limitations of artificial intelligence in glaucoma surgery outcome prediction, emphasizing the need for prospective, multicenter studies, publicly available datasets, and standardized evaluation metrics to enhance the generalizability and clinical utility of future models.

Systematic review registration: https://www.crd.york.ac.uk/PROSPERO/view/CRD42024621758, identifier: CRD42024621758.

{"title":"Artificial intelligence for surgical outcome prediction in glaucoma: a systematic review.","authors":"Zeena Kailani, Lauren Kim, Joshua Bierbrier, Michael Balas, David J Mathew","doi":"10.3389/fdata.2025.1605018","DOIUrl":"10.3389/fdata.2025.1605018","url":null,"abstract":"<p><strong>Introduction: </strong>Glaucoma is a leading cause of irreversible blindness, and its rising global prevalence has led to a significant increase in glaucoma surgeries. However, predicting postoperative outcomes remains challenging due to the complex interplay of patient factors, surgical techniques, and postoperative care. Artificial intelligence (AI) has emerged as a promising tool for enhancing predictive accuracy in clinical decision-making.</p><p><strong>Methods: </strong>This systematic review was conducted to evaluate the current evidence on the use of AI to predict surgical outcomes in glaucoma patients. A comprehensive search of Medline, Embase, Web of Science, and Scopus was performed. Studies were included if they applied AI models to glaucoma surgery outcome prediction.</p><p><strong>Results: </strong>Six studies met inclusion criteria, collectively analyzing 4,630 surgeries. A variety of algorithms were applied, including random forests, support vector machines, and neural networks. Overall, AI models consistently outperformed traditional statistical approaches, with the best-performing model achieving an accuracy of 87.5%. Key predictors of outcomes included demographic factors (e.g., age), systemic health indicators (e.g., smoking status and body mass index), and ophthalmic parameters (e.g., baseline intraocular pressure, central corneal thickness, mitomycin C use).</p><p><strong>Discussion: </strong>While AI models demonstrated superior performance to traditional statistical approaches, the lack of external validation and standardized surgical success definitions limit their clinical applicability. 
This review highlights both the promise and the current limitations of artificial intelligence in glaucoma surgery outcome prediction, emphasizing the need for prospective, multicenter studies, publicly available datasets, and standardized evaluation metrics to enhance the generalizability and clinical utility of future models.</p><p><strong>Systematic review registration: </strong>https://www.crd.york.ac.uk/PROSPERO/view/CRD42024621758, identifier: CRD42024621758.</p>","PeriodicalId":52859,"journal":{"name":"Frontiers in Big Data","volume":"8 ","pages":"1605018"},"PeriodicalIF":2.4,"publicationDate":"2025-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12370750/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144977903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A fashion product recommendation based on adaptive VPKNN-NET algorithm without fuzzy similar image.
IF 2.4 Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-08-07 eCollection Date: 2025-01-01 DOI: 10.3389/fdata.2025.1557779
R Sabitha, D Sundar

Introduction: Recommender systems are essential in e-commerce for assisting users in navigating large product catalogs, particularly in visually driven domains like fashion. Traditional keyword-based systems often struggle to capture subjective style preferences.

Methods: This study proposes a novel fashion recommendation framework using an Adaptive VPKNN-net algorithm. The model integrates deep visual feature extraction using a pre-trained VGG16 Convolutional Neural Network (CNN), dimensionality reduction through Principal Component Analysis (PCA), and a modified K-Nearest Neighbors (KNN) algorithm that combines Euclidean and cosine similarity metrics to enhance visual similarity assessment.

Results: Experiments were conducted using the "Fashion Product Images (Small)" dataset from Kaggle. The proposed system achieved high accuracy (98.69%) and demonstrated lower RMSE (0.8213) and MAE (0.6045) compared to baseline models such as Random Forest, SVM, and standard KNN.

Discussion: The proposed Adaptive VPKNN-net framework significantly improves the precision, interpretability, and efficiency of visual fashion recommendations. It eliminates the limitations of fuzzy similarity models and offers a scalable solution for visually oriented e-commerce platforms, particularly in cold-start scenarios and low-data conditions.
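The modified KNN step described in this abstract blends Euclidean and cosine similarity over PCA-reduced visual features. A hedged sketch of such a blended ranking follows; the blending weight `alpha` and the distance-to-similarity mapping are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def combined_knn(query, catalog, k=3, alpha=0.5):
    """Rank catalog items by a blend of Euclidean and cosine similarity.

    query: (d,) feature vector; catalog: (n, d) matrix of item features.
    """
    diff = catalog - query
    euclid = np.sqrt((diff ** 2).sum(axis=1))
    euclid_sim = 1.0 / (1.0 + euclid)                 # map distance into (0, 1]
    norms = np.linalg.norm(catalog, axis=1) * np.linalg.norm(query)
    cosine_sim = (catalog @ query) / (norms + 1e-12)  # guard against zero vectors
    score = alpha * euclid_sim + (1.0 - alpha) * cosine_sim
    return np.argsort(-score)[:k]                     # indices of the top-k items

# Toy catalog of three PCA-reduced "product" vectors
catalog = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
query = np.array([1.0, 0.0])
top = combined_knn(query, catalog, k=2)  # the identical item ranks first
```

Combining the two metrics hedges against their individual blind spots: cosine similarity ignores feature magnitude, while Euclidean distance penalizes it, so a weighted sum can rank items that are close in both senses above items that match in only one.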

{"title":"A fashion product recommendation based on adaptive VPKNN-NET algorithm without fuzzy similar image.","authors":"R Sabitha, D Sundar","doi":"10.3389/fdata.2025.1557779","DOIUrl":"10.3389/fdata.2025.1557779","url":null,"abstract":"<p><strong>Introduction: </strong>Recommender systems are essential in e-commerce for assisting users in navigating large product catalogs, particularly in visually driven domains like fashion. Traditional keyword-based systems often struggle to capture subjective style preferences.</p><p><strong>Methods: </strong>This study proposes a novel fashion recommendation framework using an Adaptive VPKNN-net algorithm. The model integrates deep visual feature extraction using a pre-trained VGG16 Convolutional Neural Network (CNN), dimensionality reduction through Principal Component Analysis (PCA), and a modified K-Nearest Neighbors (KNN) algorithm that combines Euclidean and cosine similarity metrics to enhance visual similarity assessment.</p><p><strong>Results: </strong>Experiments were conducted using the \"Fashion Product Images (Small)\" dataset from Kaggle. The proposed system achieved high accuracy (98.69%) and demonstrated lower RMSE (0.8213) and MAE (0.6045) compared to baseline models such as Random Forest, SVM, and standard KNN.</p><p><strong>Discussion: </strong>The proposed Adaptive VPKNN-net framework significantly improves the precision, interpretability, and efficiency of visual fashion recommendations. 
It eliminates the limitations of fuzzy similarity models and offers a scalable solution for visually oriented e-commerce platforms, particularly in cold-start scenarios and low-data conditions.</p>","PeriodicalId":52859,"journal":{"name":"Frontiers in Big Data","volume":"8 ","pages":"1557779"},"PeriodicalIF":2.4,"publicationDate":"2025-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12367692/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144977884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0