
Latest publications in Frontiers in Big Data

Efficient enhancement of low-rank tensor completion via thin QR decomposition.
IF 2.4 | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-02 | eCollection Date: 2024-01-01 | DOI: 10.3389/fdata.2024.1382144
Yan Wu, Yunzhi Jin

Low-rank tensor completion (LRTC), which aims to recover the missing entries of a partially observed tensor by exploiting its low-rank structure, has been widely used in various real-world problems. The core tensor nuclear norm minimization (CTNM) method based on Tucker decomposition is one of the most common LRTC methods. However, CTNM methods based on Tucker decomposition often incur a large computational cost because the usual factor-matrix solver performs multiple singular value decompositions (SVDs) in each iteration. To address this problem, this article proposes an efficient CTNM method based on thin QR decomposition (CTNM-QR) with lower computational complexity. The proposed method extends CTNM by introducing tensor versions of the auxiliary variables instead of matrices, and solves for the factor matrices with the thin QR decomposition rather than the SVD, which reduces computational cost and improves tensor completion accuracy. The convergence and complexity of CTNM-QR are also analyzed. Extensive experiments on synthetic data, real color images, and brain MRI data at different missing rates demonstrate that the proposed method not only excels in completion accuracy and visual quality, but also runs more efficiently than most state-of-the-art LRTC methods.
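As a quick illustration of why a thin (reduced) QR factorization can stand in for the SVD when only an orthonormal factor basis is needed, the following numpy sketch compares the two on a toy low-rank matrix. This is a generic numerical illustration, not the authors' CTNM-QR algorithm; the sizes and rank are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mode-unfolding of a low-rank tensor: a 200 x 50 matrix of rank 5.
r = 5
X = rng.standard_normal((200, r)) @ rng.standard_normal((r, 50))

# SVD route: the leading left singular vectors give an orthonormal factor basis.
U_svd = np.linalg.svd(X, full_matrices=False)[0][:, :r]

# Thin QR route: the reduced QR of X is cheaper in practice, and for generic
# data its leading r columns span the same column space.
Q = np.linalg.qr(X)[0][:, :r]

# Projecting the SVD basis onto span(Q) reproduces it, i.e. the two bases
# carry the same subspace information.
print(np.allclose(Q @ (Q.T @ U_svd), U_svd, atol=1e-6))
```

Both factorizations are O(mn²) for an m × n unfolding, but the QR has a noticeably smaller constant than a full SVD and no iterative phase, which is the kind of per-iteration saving the paper exploits.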

Frontiers in Big Data, vol. 7, article 1382144. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11250652/pdf/
Citations: 0
Random kernel k-nearest neighbors regression.
IF 2.4 | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-01 | eCollection Date: 2024-01-01 | DOI: 10.3389/fdata.2024.1402384
Patchanok Srisuradetchai, Korn Suksrikran

The k-nearest neighbors (KNN) regression method, known for its nonparametric nature, is highly valued for its simplicity and its effectiveness in handling complex structured data, particularly in big data contexts. However, the method is susceptible to overfitting and fit discontinuity, which present significant challenges. This paper introduces random kernel k-nearest neighbors (RK-KNN) regression as a novel approach well suited to big data applications. It integrates kernel smoothing with bootstrap sampling to enhance prediction accuracy and model robustness. The method aggregates multiple predictions, each produced by kernel KNN (K-KNN) on a bootstrap sample of the training data with a randomly selected subset of input variables. A comprehensive evaluation of RK-KNN on 15 diverse datasets, employing various kernel functions including Gaussian and Epanechnikov, demonstrates its superior performance. Compared with standard KNN and random KNN (R-KNN) models, it significantly reduces the root mean square error (RMSE) and mean absolute error and improves R-squared values. The RK-KNN variant employing the kernel function that yields the lowest RMSE is then benchmarked against state-of-the-art methods, including support vector regression, artificial neural networks, and random forests.
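The ingredients described above (kernel weighting of neighbors, bootstrap resampling, random feature subsets, ensemble averaging) can be sketched in a few lines. The function names, Gaussian bandwidth, and ensemble size below are our illustrative choices, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_kernel(d, h=1.0):
    # Gaussian kernel weight for distance d with bandwidth h.
    return np.exp(-((d / h) ** 2) / 2)

def kernel_knn_predict(Xtr, ytr, x, k=5, h=1.0):
    # Kernel-weighted k-NN: weight the k nearest neighbors by a kernel of
    # their distance instead of plain averaging.
    d = np.linalg.norm(Xtr - x, axis=1)
    idx = np.argsort(d)[:k]
    w = gaussian_kernel(d[idx], h)
    return np.sum(w * ytr[idx]) / np.sum(w)

def rk_knn_predict(Xtr, ytr, x, n_models=25, k=5, h=1.0):
    # Illustrative RK-KNN-style ensemble: each member sees a bootstrap sample
    # of the rows and a random subset of the input variables.
    n, p = Xtr.shape
    preds = []
    for _ in range(n_models):
        rows = rng.integers(0, n, size=n)                      # bootstrap sample
        cols = rng.choice(p, size=max(1, p // 2), replace=False)  # feature subset
        preds.append(kernel_knn_predict(Xtr[rows][:, cols], ytr[rows], x[cols], k, h))
    return float(np.mean(preds))

# Smooth toy regression problem: y is the sum of the features.
Xtr = rng.uniform(-1, 1, size=(200, 4))
ytr = Xtr.sum(axis=1)
x = np.array([0.2, -0.1, 0.3, 0.0])
print(rk_knn_predict(Xtr, ytr, x))
```

Averaging over bootstrap samples smooths the fit discontinuities of a single KNN model, while the random feature subsets decorrelate the ensemble members, the same motivation as in random forests.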

Frontiers in Big Data, vol. 7, article 1402384. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11246867/pdf/
Citations: 0
Global explanation supervision for Graph Neural Networks.
IF 2.4 | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-01 | eCollection Date: 2024-01-01 | DOI: 10.3389/fdata.2024.1410424
Negar Etemadyrad, Yuyang Gao, Sai Manoj Pudukotai Dinakarrao, Liang Zhao

With the increasing popularity of Graph Neural Networks (GNNs) for predictive tasks on graph-structured data, research on their explainability is becoming more critical and achieving significant progress. Although many methods have been proposed to explain the predictions of GNNs, their focus is mainly on "how to generate explanations." However, other important research questions, such as "whether the GNN explanations are inaccurate," "what if the explanations are inaccurate," and "how to adjust the model to generate more accurate explanations," have received little attention. Our previous GNN Explanation Supervision (GNES) framework demonstrated effectiveness in improving the reasonability of local explanations while maintaining, or even improving, the performance of the backbone GNN model. In many applications, however, we need global explanations that are reasonable and faithful to the domain data rather than per-sample explanations; simply learning to explain GNNs locally is not an optimal route to a global understanding of the model. To improve the explanatory power of the GNES framework, we propose the Global GNN Explanation Supervision (GGNES) technique, which uses a basic trained GNN and a global extension of the loss function used in the GNES framework. This GNN creates local explanations that are fed to a Global Logic-based GNN Explainer, an existing technique that learns the global explanation in the form of a logic formula. The two frameworks are then trained iteratively to generate reasonable global explanations. Extensive experiments demonstrate the effectiveness of the proposed model in improving global explanations while keeping performance similar or even increasing the model's predictive power.
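The general shape of an explanation-supervision objective, a task loss plus a penalty that pulls the model's explanation toward a reference explanation, can be sketched as follows. This is a generic mean-squared formulation of our own for intuition; the actual GNES/GGNES losses differ in detail.

```python
import numpy as np

def explanation_supervised_loss(pred, target, model_expl, ref_expl, lam=0.5):
    # Generic explanation-supervision objective in the spirit of GNES/GGNES
    # (the paper's exact terms differ): task loss + lam * explanation loss.
    task = np.mean((pred - target) ** 2)          # how wrong the predictions are
    expl = np.mean((model_expl - ref_expl) ** 2)  # how far the explanation drifts
    return task + lam * expl

pred = np.array([0.9, 0.1])
target = np.array([1.0, 0.0])
expl = np.array([0.8, 0.1, 0.1])   # e.g. per-node importance scores
ref = np.array([1.0, 0.0, 0.0])    # domain-provided "reasonable" explanation
print(round(explanation_supervised_loss(pred, target, expl, ref), 4))  # → 0.02
```

Minimizing this jointly is what lets explanation quality feed back into the model weights, rather than explanations being a read-only post hoc artifact.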

Frontiers in Big Data, vol. 7, article 1410424. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11246961/pdf/
Citations: 0
YOLOv8's advancements in tuberculosis identification from chest images.
IF 2.4 | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-06-27 | eCollection Date: 2024-01-01 | DOI: 10.3389/fdata.2024.1401981
Mohamudha Parveen Rahamathulla, W R Sam Emmanuel, A Bindhu, Mohamed Mustaq Ahmed

Tuberculosis (TB) is a chronic, pathogenic disease that can lead to life-threatening outcomes, including death. Many people have been affected by TB owing to inaccurate or late diagnosis and inadequate treatment. Early detection of TB is important to protect people from the severity of the disease and its consequences. Traditionally, manual methods such as reading chest X-rays and CT scans have been used for TB diagnosis; however, these approaches are time-consuming and often fail to achieve optimal results. Several researchers have therefore focused on automated TB prediction, but existing approaches still suffer from limited accuracy, overfitting, and slow speed. To improve TB prediction, the proposed research adds a Selection Focal Fusion (SFF) block with an attention mechanism to the You Only Look Once v8 (YOLOv8, Ultralytics software company, Los Angeles, United States) object detection model, trained on the Kaggle TBX-11k dataset. YOLOv8 is used for its ability to detect multiple objects in a single pass; however, it struggles with small objects and cannot perform fine-grained classification. To mitigate this problem, the proposed research incorporates the SFF technique to improve detection performance and decrease the missed-detection rate for small objects. The efficacy of the proposed mechanism is evaluated using performance metrics such as recall, precision, F1-score, and mean Average Precision (mAP), and comparison with existing models demonstrates the efficiency of the proposed research. The present research is envisioned to contribute to the medical field and assist radiologists in identifying tuberculosis using the YOLOv8 model to obtain an optimal outcome.
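The evaluation metrics named above are standard for object detection. A small sketch of how intersection-over-union (IoU) and precision/recall/F1 are computed from boxes and match counts; the boxes and counts below are made-up illustrative numbers, not results from the paper.

```python
def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    def box_area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    union = box_area(a) + box_area(b) - inter
    return inter / union if union else 0.0

def prf(tp, fp, fn):
    # Precision, recall, and F1 from detection match counts.
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return p, r, 2 * p * r / (p + r)

# A predicted lesion box vs. ground truth; IoU >= 0.5 usually counts as a hit.
print(round(iou((10, 10, 50, 50), (20, 20, 60, 60)), 3))  # → 0.391
p, r, f1 = prf(tp=46, fp=4, fn=8)
print(round(p, 2), round(r, 2), round(f1, 2))  # → 0.92 0.85 0.88
```

mAP then averages precision over recall levels (and typically over IoU thresholds and classes), which is why it is the headline number for detectors like YOLOv8.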

Frontiers in Big Data, vol. 7, article 1401981. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11236731/pdf/
Citations: 0
MedT5SQL: a transformers-based large language model for text-to-SQL conversion in the healthcare domain.
IF 2.4 | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-06-26 | eCollection Date: 2024-01-01 | DOI: 10.3389/fdata.2024.1371680
Alaa Marshan, Anwar Nais Almutairi, Athina Ioannou, David Bell, Asmat Monaghan, Mahir Arzoky

Introduction: In response to the increasing prevalence of electronic medical records (EMRs) stored in databases, healthcare staff are encountering difficulties retrieving these records due to their limited technical expertise in database operations. As these records are crucial for delivering appropriate medical care, there is a need for an accessible method for healthcare staff to access EMRs.

Methods: To address this, natural language processing (NLP) for Text-to-SQL has emerged as a solution, enabling non-technical users to generate SQL queries using natural language text. This research assesses existing work on Text-to-SQL conversion and proposes the MedT5SQL model specifically designed for EMR retrieval. The proposed model utilizes the Text-to-Text Transfer Transformer (T5) model, a Large Language Model (LLM) commonly used in various text-based NLP tasks. The model is fine-tuned on the MIMICSQL dataset, the first Text-to-SQL dataset for the healthcare domain. Performance evaluation involves benchmarking the MedT5SQL model on two optimizers, varying numbers of training epochs, and using two datasets, MIMICSQL and WikiSQL.

Results: On the MIMICSQL dataset, the model demonstrates considerable effectiveness in generating question-SQL pairs, achieving 80.63% on the exact-match accuracy metric, 98.937% on approximate string-matching, and 90% on manual evaluation. On the WikiSQL dataset, the model generates SQL queries efficiently, with an exact-match accuracy of 44.2% and 94.26% for approximate string-matching.

Discussion: Results indicate improved performance with increased training epochs. This work highlights the potential of the fine-tuned T5 model to convert medical questions written in natural language into Structured Query Language (SQL) in the healthcare domain, providing a foundation for future research in this area.
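Exact match and approximate string-matching measure different things, which is why their scores diverge so widely above. A sketch of one common approximate matcher, the normalized similarity ratio from Python's `difflib` (our choice of implementation; the paper does not specify it), shows why a near-miss SQL query can score very high approximately while failing exact match:

```python
from difflib import SequenceMatcher

def approx_match(pred_sql, gold_sql):
    # Normalized similarity ratio in [0, 1]: 2 * matching_chars / total_chars.
    return SequenceMatcher(None, pred_sql, gold_sql).ratio()

gold = "SELECT name FROM patients WHERE age > 60"
pred = "SELECT name FROM patients WHERE age >= 60"  # one extra character

score = approx_match(pred, gold)
print(pred == gold, score > 0.9)  # → False True
```

A query off by a single operator character fails exact match entirely but keeps an approximate score near 1.0, mirroring the 44.2% vs. 94.26% gap reported on WikiSQL.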

Frontiers in Big Data, vol. 7, article 1371680. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11233734/pdf/
Citations: 0
Source-free domain adaptation for semantic image segmentation using internal representations.
IF 2.4 | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-06-18 | eCollection Date: 2024-01-01 | DOI: 10.3389/fdata.2024.1359317
Serban Stan, Mohammad Rostami

Semantic segmentation models trained on annotated data fail to generalize well when the input data distribution changes over extended time periods, requiring re-training to maintain performance. Classic unsupervised domain adaptation (UDA) addresses a similar problem, a target domain with no annotated data points, by transferring knowledge from a source domain with annotated data. We develop an online UDA algorithm for semantic segmentation of images that improves model generalization on unannotated domains in scenarios where access to the source data is restricted during adaptation. We perform model adaptation by minimizing the distributional distance between the source latent features and the target features in a shared embedding space. Our solution promotes a shared, domain-agnostic latent feature space between the two domains, which allows the classifier to generalize on the target dataset. To alleviate the need for access to source samples during adaptation, we approximate the source latent feature distribution via an appropriate surrogate distribution, in this case a Gaussian mixture model (GMM).
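The surrogate idea, keeping only a GMM fitted to the source latent features so that pseudo-features can be drawn after the source data is discarded, can be sketched in one dimension with a basic EM loop. This is illustrative only: real latent features are high-dimensional, and the paper's estimator may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for source latent features: two well-separated clusters.
z = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(3.0, 1.0, 500)])

mu = np.array([-1.0, 1.0])    # component means (initial guess)
sigma = np.array([1.0, 1.0])  # component standard deviations
w = np.array([0.5, 0.5])      # mixture weights

for _ in range(50):
    # E-step: responsibility of each component for each point.
    dens = w * np.exp(-((z[:, None] - mu) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances.
    nk = resp.sum(axis=0)
    w = nk / len(z)
    mu = (resp * z[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (z[:, None] - mu) ** 2).sum(axis=0) / nk)

# Source-free adaptation step: draw pseudo source features from the surrogate
# instead of touching the (now unavailable) source data.
comp = rng.choice(2, size=1000, p=w)
pseudo = rng.normal(mu[comp], sigma[comp])
print(np.round(np.sort(mu), 1))
```

The fitted `mu`/`sigma`/`w` triplet is all that must be retained from the source domain; distribution matching is then done against samples like `pseudo`.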

Frontiers in Big Data, vol. 7, article 1359317. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11217319/pdf/
Citations: 0
Toward the design of persuasive systems for a healthy workplace: a real-time posture detection.
IF 2.4 | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-06-17 | eCollection Date: 2024-01-01 | DOI: 10.3389/fdata.2024.1359906
Grace Ataguba, Rita Orji

Persuasive technologies, in connection with human-factor engineering requirements for healthy workplaces, have played a significant role in bringing about changes in human behavior. Healthy-workplace guidance covers best practices for body posture, proximity to the computer system, movement, lighting conditions, computer system layout, and other significant psychological and cognitive aspects. Most importantly, body posture guidance indicates how users should sit or stand in the workplace in line with best and healthy practices. In this study, we conducted two study phases (pilot and main) using two deep learning models: a convolutional neural network (CNN) and YOLO-V3. To train the two models, we collected posture datasets from Creative Commons-licensed YouTube videos and Kaggle, and classified the data into comfortable and uncomfortable postures. Results show that our YOLO-V3 model outperformed the CNN model, with a mean average precision of 92%. Based on this finding, we recommend that the YOLO-V3 model be integrated into the design of persuasive technologies for a healthy workplace. Additionally, we discuss future implications for integrating proximity detection, taking into account the ideal distance in centimeters that users should maintain in a healthy workplace.
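For intuition, a posture rule of the kind such a system might apply downstream of detection, classifying a sitting pose from the ear-shoulder-hip angle of detected keypoints, can be sketched as follows. The keypoints, angle threshold, and labels here are hypothetical illustrations of ours, not the paper's trained models.

```python
import math

def angle(a, b, c):
    # Angle in degrees at vertex b formed by points a-b-c (2-D coordinates).
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def posture_label(ear, shoulder, hip, threshold=160.0):
    # Hypothetical rule: a near-straight ear-shoulder-hip line reads as
    # "comfortable"; a slouched head drops the angle below the threshold.
    return "comfortable" if angle(ear, shoulder, hip) >= threshold else "uncomfortable"

upright = posture_label(ear=(0, 100), shoulder=(2, 60), hip=(0, 0))
slouched = posture_label(ear=(25, 90), shoulder=(2, 60), hip=(0, 0))
print(upright, slouched)  # → comfortable uncomfortable
```

A detector such as YOLO-V3 supplies the boxes or keypoints in real time; the persuasive layer then turns labels like these into nudges toward healthier posture.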

Frontiers in Big Data, vol. 7, article 1359906. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11215059/pdf/
Citations: 0
An encoding framework for binarized images using hyperdimensional computing.
IF 2.4 Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-06-14 eCollection Date: 2024-01-01 DOI: 10.3389/fdata.2024.1371518
Laura Smets, Werner Van Leekwijck, Ing Jyh Tsang, Steven Latré

Introduction: Hyperdimensional Computing (HDC) is a brain-inspired and lightweight machine learning method. It has received significant attention in the literature as a candidate to be applied in the wearable Internet of Things, near-sensor artificial intelligence applications, and on-device processing. HDC is computationally less complex than traditional deep learning algorithms and typically achieves moderate to good classification performance. A key aspect that determines the performance of HDC is encoding the input data to the hyperdimensional (HD) space.

Methods: This article proposes a novel lightweight approach that relies only on native HD arithmetic vector operations to encode binarized images. By using point-of-interest selection and local linear mapping, the encoding preserves the similarity of patterns at nearby locations.

Results: The method reaches an accuracy of 97.92% on the test set for the MNIST data set and 84.62% for the Fashion-MNIST data set.

Discussion: These results outperform other studies using native HDC with different encoding approaches and are on par with more complex hybrid HDC models and lightweight binarized neural networks. The proposed encoding approach also demonstrates higher robustness to noise and blur compared to the baseline encoding.
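The encoding idea can be illustrated with the standard record-based HD scheme — binding a random position hypervector with a value hypervector for each pixel, then bundling — which is a simplification of the paper's point-of-interest selection and local linear mapping; the dimensionality and the one-pixel-flip check below are illustrative:

```python
import numpy as np

D = 10000  # hypervector dimensionality
rng = np.random.default_rng(0)

def encode_image(img, pos_hvs, val_hvs):
    """Record-based HD encoding of a binarized image: bind each pixel's
    position hypervector with its value hypervector (elementwise product),
    bundle all pixels (sum), then binarize with sign."""
    flat = img.ravel()
    bound = pos_hvs * val_hvs[flat]        # bind position with pixel value
    bundled = bound.sum(axis=0)            # superposition of all pixels
    return np.sign(bundled + 0.5)          # bipolar output; ties break to +1

# random bipolar {-1,+1} basis hypervectors
n_pixels = 28 * 28
pos_hvs = rng.choice([-1, 1], size=(n_pixels, D)).astype(np.int8)
val_hvs = rng.choice([-1, 1], size=(2, D)).astype(np.int8)  # pixel 0 or 1

img = rng.integers(0, 2, size=(28, 28))
hv = encode_image(img, pos_hvs, val_hvs)

# near-identical images map to near-identical hypervectors
img2 = img.copy()
img2[0, 0] ^= 1  # flip a single pixel
hv2 = encode_image(img2, pos_hvs, val_hvs)
sim = hv @ hv2 / D
print(f"similarity after a one-pixel flip: {sim:.3f}")
```

A classifier then compares a query hypervector against bundled class prototypes by this same similarity; the similarity after a one-pixel flip stays close to 1, which is the property the paper's locality-preserving encoding strengthens further.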

{"title":"An encoding framework for binarized images using hyperdimensional computing.","authors":"Laura Smets, Werner Van Leekwijck, Ing Jyh Tsang, Steven Latré","doi":"10.3389/fdata.2024.1371518","DOIUrl":"10.3389/fdata.2024.1371518","url":null,"abstract":"<p><strong>Introduction: </strong>Hyperdimensional Computing (HDC) is a brain-inspired and lightweight machine learning method. It has received significant attention in the literature as a candidate to be applied in the wearable Internet of Things, near-sensor artificial intelligence applications, and on-device processing. HDC is computationally less complex than traditional deep learning algorithms and typically achieves moderate to good classification performance. A key aspect that determines the performance of HDC is encoding the input data to the hyperdimensional (HD) space.</p><p><strong>Methods: </strong>This article proposes a novel lightweight approach relying only on native HD arithmetic vector operations to encode binarized images that preserves the similarity of patterns at nearby locations by using point of interest selection and <i>local linear mapping</i>.</p><p><strong>Results: </strong>The method reaches an accuracy of 97.92% on the test set for the MNIST data set and 84.62% for the Fashion-MNIST data set.</p><p><strong>Discussion: </strong>These results outperform other studies using native HDC with different encoding approaches and are on par with more complex hybrid HDC models and lightweight binarized neural networks. 
The proposed encoding approach also demonstrates higher robustness to noise and blur compared to the baseline encoding.</p>","PeriodicalId":52859,"journal":{"name":"Frontiers in Big Data","volume":"7 ","pages":"1371518"},"PeriodicalIF":2.4,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11214273/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141472535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Stable tensor neural networks for efficient deep learning.
IF 3.1 Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-05-30 eCollection Date: 2024-01-01 DOI: 10.3389/fdata.2024.1363978
Elizabeth Newman, Lior Horesh, Haim Avron, Misha E Kilmer

Learning from complex, multidimensional data has become central to computational mathematics, and among the most successful high-dimensional function approximators are deep neural networks (DNNs). Training DNNs is posed as an optimization problem to learn network weights or parameters that well-approximate a mapping from input to target data. Multiway data or tensors arise naturally in myriad ways in deep learning, in particular as input data and as high-dimensional weights and features extracted by the network, with the latter often being a bottleneck in terms of speed and memory. In this work, we leverage tensor representations and processing to efficiently parameterize DNNs when learning from high-dimensional data. We propose tensor neural networks (t-NNs), a natural extension of traditional fully-connected networks, that can be trained efficiently in a reduced, yet more powerful parameter space. Our t-NNs are built upon matrix-mimetic tensor-tensor products, which retain algebraic properties of matrix multiplication while capturing high-dimensional correlations. Mimeticity enables t-NNs to inherit desirable properties of modern DNN architectures. We exemplify this by extending recent work on stable neural networks, which interpret DNNs as discretizations of differential equations, to our multidimensional framework. We provide empirical evidence of the parametric advantages of t-NNs on dimensionality reduction using autoencoders and classification using fully-connected and stable variants on benchmark imaging datasets MNIST and CIFAR-10.
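The matrix-mimetic tensor-tensor product at the core of t-NNs can be sketched via its Fourier-domain formulation: transform along the third dimension, multiply the frontal slices, and transform back. This is a minimal sketch; the layer shapes and the ReLU layer below are illustrative, not the authors' architecture:

```python
import numpy as np

def t_product(A, B):
    """t-product of A (l x m x n3) with B (m x p x n3): facewise matrix
    products in the Fourier domain along the third (tube) dimension."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)   # slice-wise matmul
    return np.real(np.fft.ifft(Ch, axis=2))

# a toy t-NN layer: Z = ReLU(W * X + b), with * the t-product
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5, 3))  # 4 features, 5 samples, depth 3
W = rng.standard_normal((2, 4, 3))  # maps 4 features to 2 per slice
b = rng.standard_normal((2, 1, 3))  # tube-wise bias, broadcast over samples
Z = np.maximum(t_product(W, X) + b, 0)
print(Z.shape)  # (2, 5, 3)
```

The matrix-mimetic property shows up directly: with a single frontal slice (n3 = 1) the t-product reduces to an ordinary matrix product, so a t-NN with depth-1 tensors collapses to a standard fully-connected network.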

{"title":"Stable tensor neural networks for efficient deep learning.","authors":"Elizabeth Newman, Lior Horesh, Haim Avron, Misha E Kilmer","doi":"10.3389/fdata.2024.1363978","DOIUrl":"10.3389/fdata.2024.1363978","url":null,"abstract":"<p><p>Learning from complex, multidimensional data has become central to computational mathematics, and among the most successful high-dimensional function approximators are deep neural networks (DNNs). Training DNNs is posed as an optimization problem to learn network weights or parameters that well-approximate a mapping from input to target data. Multiway data or tensors arise naturally in myriad ways in deep learning, in particular as input data and as high-dimensional weights and features extracted by the network, with the latter often being a bottleneck in terms of speed and memory. In this work, we leverage tensor representations and processing to efficiently parameterize DNNs when learning from high-dimensional data. We propose tensor neural networks (t-NNs), a natural extension of traditional fully-connected networks, that can be trained efficiently in a reduced, yet more powerful parameter space. Our t-NNs are built upon matrix-mimetic tensor-tensor products, which retain algebraic properties of matrix multiplication while capturing high-dimensional correlations. Mimeticity enables t-NNs to inherit desirable properties of modern DNN architectures. We exemplify this by extending recent work on stable neural networks, which interpret DNNs as discretizations of differential equations, to our multidimensional framework. 
We provide empirical evidence of the parametric advantages of t-NNs on dimensionality reduction using autoencoders and classification using fully-connected and stable variants on benchmark imaging datasets MNIST and CIFAR-10.</p>","PeriodicalId":52859,"journal":{"name":"Frontiers in Big Data","volume":"7 ","pages":"1363978"},"PeriodicalIF":3.1,"publicationDate":"2024-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11170703/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141318951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
From theory to practice: insights and hurdles in collecting social media data for social science research.
IF 3.1 Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-05-30 eCollection Date: 2024-01-01 DOI: 10.3389/fdata.2024.1379921
Yan Chen, Kate Sherren, Kyung Young Lee, Lori McCay-Peet, Shan Xue, Michael Smit

Social media has profoundly changed our modes of self-expression, communication, and participation in public discourse, generating volumes of conversations and content that cover every aspect of our social lives. Social media platforms have thus become increasingly important as data sources to identify social trends and phenomena. In recent years, academics have steadily lost ground on access to social media data as technology companies have set more restrictions on Application Programming Interfaces (APIs) or entirely closed public APIs. This circumstance halts the work of many social scientists who have used such data to study issues of public good. We considered the viability of eight approaches for image-based social media data collection: data philanthropy organizations, data repositories, data donation, third-party data companies, homegrown tools, and various web scraping tools and scripts. This paper discusses the advantages and challenges of these approaches from literature and from the authors' experience. We conclude the paper by discussing mechanisms for improving social media data collection that will enable this future frontier of social science research.

{"title":"From theory to practice: insights and hurdles in collecting social media data for social science research.","authors":"Yan Chen, Kate Sherren, Kyung Young Lee, Lori McCay-Peet, Shan Xue, Michael Smit","doi":"10.3389/fdata.2024.1379921","DOIUrl":"10.3389/fdata.2024.1379921","url":null,"abstract":"<p><p>Social media has profoundly changed our modes of self-expression, communication, and participation in public discourse, generating volumes of conversations and content that cover every aspect of our social lives. Social media platforms have thus become increasingly important as data sources to identify social trends and phenomena. In recent years, academics have steadily lost ground on access to social media data as technology companies have set more restrictions on Application Programming Interfaces (APIs) or entirely closed public APIs. This circumstance halts the work of many social scientists who have used such data to study issues of public good. We considered the viability of eight approaches for image-based social media data collection: data philanthropy organizations, data repositories, data donation, third-party data companies, homegrown tools, and various web scraping tools and scripts. This paper discusses the advantages and challenges of these approaches from literature and from the authors' experience. 
We conclude the paper by discussing mechanisms for improving social media data collection that will enable this future frontier of social science research.</p>","PeriodicalId":52859,"journal":{"name":"Frontiers in Big Data","volume":"7 ","pages":"1379921"},"PeriodicalIF":3.1,"publicationDate":"2024-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11169574/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141319551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0