
Latest articles from the Journal of Pathology Informatics

Erratum Regarding Previously Published Articles
Q2 Medicine Pub Date: 2024-12-01 DOI: 10.1016/j.jpi.2024.100365
Citations: 0
Pathology Visions 2023 Overview
Q2 Medicine Pub Date: 2024-12-01 DOI: 10.1016/j.jpi.2024.100362
Citations: 0
Learning to predict prostate cancer recurrence from tissue images
Q2 Medicine Pub Date: 2024-12-01 DOI: 10.1016/j.jpi.2023.100344
Mahtab Farrokh , Neeraj Kumar , Peter H. Gann , Russell Greiner
Roughly 30% of men with prostate cancer who undergo radical prostatectomy will suffer biochemical cancer recurrence (BCR). Accurately predicting which patients will experience BCR could identify who would benefit from increased surveillance or adjuvant therapy. Unfortunately, no current method can effectively predict this. We develop and evaluate PathCLR, a novel semi-supervised method that learns a model that uses hematoxylin and eosin (H&E)-stained tissue microarrays (TMAs) to predict prostate cancer recurrence within 5 years after diagnosis. The learning process involves 2 sequential steps: PathCLR (a) first employs self-supervised learning to generate effective feature representations of the input images, then (b) feeds these learned features into a fully supervised neural network classifier to learn a model for predicting BCR. We conducted training and evaluation using 2 large prostate cancer datasets: (1) the Cooperative Prostate Cancer Tissue Resource (CPCTR) with 374 patients, including 189 who experienced BCR, and (2) the Johns Hopkins University (JHU) prostate cancer dataset of 646 patients, with 451 patients having BCR. PathCLR’s (10-fold cross-validation) F1 score was 0.61 for CPCTR and 0.85 for JHU. This was statistically superior (paired t-test with P < .05) to the best-learned model that relied solely on clinicopathological features, including PSA level and primary and secondary Gleason grade. We attribute the improvement of PathCLR over models using only clinicopathological features to its use of both learned latent representations of tissue core images and clinicopathological features. This finding suggests that there is essential predictive information in tissue images at the time of surgery that goes beyond the knowledge obtained from reported clinicopathological features, helping predict the patient’s 5-year outcome.
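PathCLR's two-step recipe (a self-supervised encoder, then a supervised classifier over the fused image and clinicopathological features) can be sketched in miniature. The sketch below is illustrative only: the contrastive encoder is abstracted away as precomputed feature vectors, a single logistic unit stands in for the supervised neural network, and names such as `fuse` and `train_classifier` are ours, not from the paper.

```python
import math

def fuse(img_feats, clin_feats):
    # Late fusion: concatenate the learned image representation with
    # clinicopathological variables (e.g., PSA level, Gleason grades).
    return list(img_feats) + list(clin_feats)

def _sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_classifier(X, y, lr=0.1, epochs=500):
    # Step (b): a plain logistic unit standing in for the fully
    # supervised neural-network classifier, trained by SGD on log-loss.
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = _sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            g = p - yi  # gradient of log-loss w.r.t. the logit
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    # 1 = predicted BCR within 5 years, 0 = no predicted recurrence
    return 1 if _sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) > 0.5 else 0
```

In the full method, step (a) would produce `img_feats` from the H&E tissue-core images via self-supervised learning; here they are simply given.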
Citations: 0
Leveraging deep learning for identification and segmentation of “CAF-1/p60-positive” nuclei in oral squamous cell carcinoma tissue samples
Q2 Medicine Pub Date: 2024-12-01 DOI: 10.1016/j.jpi.2024.100407
Silvia Varricchio , Gennaro Ilardi , Daniela Russo , Rosa Maria Di Crescenzo , Angela Crispino , Stefania Staibano , Francesco Merolla
In the current study, we introduced a unique method for identifying and segmenting oral squamous cell carcinoma (OSCC) nuclei, concentrating on those predicted to have significant CAF-1/p60 protein expression. Our suggested model uses the StarDist architecture, a deep-learning framework designed for biomedical image segmentation tasks. The training dataset comprises painstakingly annotated masks created from tissue sections previously stained with hematoxylin and eosin (H&E) and then restained with immunohistochemistry (IHC) for p60 protein. Our algorithm uses subtle morphological and colorimetric H&E cellular characteristics to predict CAF-1/p60 IHC expression in OSCC nuclei. The StarDist-based architecture performs exceptionally well in localizing and segmenting H&E nuclei, previously identified by IHC-based ground truth. In summary, our innovative approach harnesses deep learning and multimodal information to advance the automated analysis of OSCC nuclei exhibiting specific protein expression patterns. This methodology holds promise for expediting accurate pathological assessment and gaining deeper insights into the role of CAF-1/p60 protein within the context of oral cancer progression.
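A key ingredient here is the ground truth: per-nucleus labels derived from the IHC restain of the same H&E sections. One plausible way to transfer IHC positivity onto nucleus masks, assuming the two stainings are already co-registered, is an overlap test per nucleus. The function below and its 50% threshold are our illustrative assumptions, not the authors' pipeline.

```python
def positive_nuclei(label_mask, ihc_mask, min_overlap=0.5):
    """Flag nuclei whose pixels overlap the IHC-positive mask.

    label_mask: 2D list of ints, 0 = background, k > 0 = nucleus id
    ihc_mask:   2D list of 0/1 flags (p60-positive pixels)
    Returns the set of nucleus ids whose fraction of IHC-positive
    pixels is at least min_overlap.
    """
    area, hit = {}, {}
    for row_labels, row_ihc in zip(label_mask, ihc_mask):
        for lab, pos in zip(row_labels, row_ihc):
            if lab == 0:
                continue
            area[lab] = area.get(lab, 0) + 1
            hit[lab] = hit.get(lab, 0) + pos
    return {lab for lab in area if hit[lab] / area[lab] >= min_overlap}
```

Nuclei passing this test would then form the “CAF-1/p60-positive” class in the annotated training masks.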
Citations: 0
Improving the generalizability of white blood cell classification with few-shot domain adaptation
Q2 Medicine Pub Date: 2024-12-01 DOI: 10.1016/j.jpi.2024.100405
Manon Chossegros , François Delhommeau , Daniel Stockholm , Xavier Tannier
The morphological classification of nucleated blood cells is fundamental for the diagnosis of hematological diseases. Many deep learning algorithms have been implemented to automate this classification task, but most of the time they fail to classify images coming from different sources. This is known as “domain shift”. Although some research has been conducted in this area, domain adaptation techniques are often computationally expensive and can introduce significant modifications to the original cell images. In this article, we propose an easy-to-implement workflow in which we trained a model to classify images from two datasets and tested it on images from eight other datasets. An EfficientNet model was trained on a source dataset comprising images from two different datasets. It was afterwards fine-tuned on each of the eight target datasets using 100 or fewer annotated images from those datasets. Images from both the source and the target datasets underwent a color transform to put them into a standardized color style. The importance of the color transform and of fine-tuning was evaluated through an ablation study and visually assessed with scatter plots, and an extensive error analysis was carried out. The model achieved an accuracy higher than 80% for every dataset and exceeded 90% for more than half of the datasets. The presented workflow yielded promising results in terms of generalizability, significantly improving performance on target datasets while keeping computational cost low and color transformations consistent. Source code is available at: https://github.com/mc2295/WBC_Generalization
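The color transform that puts source and target images "into a standardized color style" is not fully specified in the abstract; a common, cheap choice is channel-wise mean/variance matching (Reinhard-style normalization), sketched below under that assumption. Function names are ours.

```python
def standardize_channel(values, target_mean, target_std):
    """Shift and scale one color channel so its mean and standard
    deviation match a chosen reference style."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5 or 1.0  # guard against flat (constant) channels
    return [(v - mean) / std * target_std + target_mean for v in values]

def standardize_image(channels, targets):
    """Apply the per-channel transform to a whole image.

    channels: list of per-channel pixel lists (e.g., R, G, B)
    targets:  list of (mean, std) pairs defining the reference style
    """
    out = []
    for ch, (m, s) in zip(channels, targets):
        # Clip back into the valid 8-bit intensity range.
        out.append([min(255.0, max(0.0, v))
                    for v in standardize_channel(ch, m, s)])
    return out
```

Because the transform is a fixed linear map per channel, it is consistent across datasets and adds negligible compute, which matches the workflow's stated goals.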
Citations: 0
Pathology Informatics Summit 2024 Abstracts Ann Arbor Marriott at Eagle Crest Resort May 20-23, 2024 Ann Arbor, Michigan
Q2 Medicine Pub Date: 2024-11-19 DOI: 10.1016/j.jpi.2024.100392
Citations: 0
Visual pathology reports for communication of final margin status in laryngeal cancer surgery
Q2 Medicine Pub Date: 2024-10-28 DOI: 10.1016/j.jpi.2024.100404
Marina Aweeda , Carly Fassler , Alexander N. Perez , Alexis Miller , Kavita Prasad , Kayvon F. Sharif , James S. Lewis Jr , Kim A. Ely , Mitra Mehrad , Sarah L. Rohde , Alexander J. Langerman , Kyle Mannion , Robert J. Sinard , James L. Netterville , Eben L. Rosenthal , Michael C. Topf

Background

Positive margins are frequently observed in total laryngectomy (TL) specimens. Effective communication of margin sampling sites and final margin status between surgeons and pathologists is crucial. In this study, we evaluate the utility of multimedia visual pathology reports to facilitate interdisciplinary discussion of margin status in laryngeal cancer surgery.

Methods

Ex vivo laryngeal cancer surgical specimens were three-dimensionally (3D) scanned before standard-of-care pathological analysis. Using computer-aided design software, the 3D model was annotated to reflect inking, sectioning, and margin sampling sites, generating a visual pathology report. These reports were distributed to head and neck surgeons and pathologists postoperatively.

Results

Fifteen laryngeal cancer surgical specimens were 3D scanned and virtually annotated from January 2022 to December 2023. Most specimens (73.3%) were squamous cell carcinomas (SCCs). Among the cases, 26.7% had final positive surgical margins, whereas 13.3% had close margins, defined as <5 mm. The visual pathology report demonstrated sites of close or positive margins on the 3D specimens and was used to facilitate postoperative communication between surgeons and pathologists in 85.7% of these cases. Visual pathology reports were presented in multidisciplinary tumor board discussions (20%), email correspondences (13.3%), and teleconferences (6.7%), and were referenced in the final written pathology reports (26.7%).

Conclusions

3D scanning and virtual annotation of laryngeal cancer specimens for the creation of visual pathology reports is an innovative approach for postoperative pathology documentation, margin analysis, and surgeon–pathologist communication.
Citations: 0
Presenting the framework of the whole slide image file Babel fish: An OCR-based file labeling tool
Q2 Medicine Pub Date: 2024-10-23 DOI: 10.1016/j.jpi.2024.100402
Nils Englert , Constantin Schwab , Maximilian Legnar , Cleo-Aron Weis

Introduction

Metadata extraction from digitized slides or whole slide image files is a frequent, laborious, and tedious task. In this work, we present a tool to automatically extract all relevant slide information, such as case number, year, slide number, block number, and staining from the macro-images of the scanned slide.
We named the tool Babel fish as it helps translate relevant information printed on the slide. It encodes a few basic assumptions, for example about where particular pieces of information are located on the label, and these can be adapted to the layout at hand. The extracted metadata can then be used to sort digital slides into databases or to link them with associated case IDs from laboratory information systems.

Material and methods

The tool is based on optical character recognition (OCR). For most information, the easyOCR tool is used. For the block number and cases with insufficient results in the first OCR round, a second OCR with pytesseract is applied.
Two datasets are used: one with 342 slides for tool development and another with 110 slides for testing.

Results

For the testing set, the overall accuracy for retrieving all relevant information per slide is 0.982. Of note, the accuracy for most individual fields is 1.000, whereas the accuracy for block number detection is 0.982.

Conclusion

The Babel fish tool can be used to rename vast amounts of whole slide image files in an image analysis pipeline. Furthermore, it could be an essential part of DICOM conversion pipelines, as it extracts relevant metadata like case number, year, block ID, and staining.
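The two-pass arrangement described in the Methods (easyOCR first, pytesseract when the first round is insufficient) boils down to a confidence-gated retry. The sketch below captures only that decision logic; the two engines are passed in as plain callables, and the wrapper shape and the 0.6 threshold are illustrative assumptions, not the tool's actual code.

```python
def ocr_with_fallback(primary, fallback, image, min_conf=0.6):
    """Run the primary OCR engine; fall back to the second engine
    when the first returns nothing or a low-confidence result.

    primary(image)  -> (text, confidence), e.g. a thin easyOCR wrapper
    fallback(image) -> text,               e.g. a thin pytesseract wrapper
    Returns (text, which_engine_produced_it).
    """
    text, conf = primary(image)
    if text.strip() and conf >= min_conf:
        return text, "primary"
    return fallback(image), "fallback"
```

A real `primary` wrapper might return the best hit from `easyocr.Reader.readtext` for the slide-label region, and `fallback` might call `pytesseract.image_to_string` on the same crop.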
Citations: 0
Multiple Instance Learning for WSI: A comparative analysis of attention-based approaches
Q2 Medicine Pub Date: 2024-10-20 DOI: 10.1016/j.jpi.2024.100403
Martim Afonso , Praphulla M.S. Bhawsar , Monjoy Saha , Jonas S. Almeida , Arlindo L. Oliveira
Whole slide images (WSI), obtained by high-resolution digital scanning of microscope slides at multiple scales, are the cornerstone of modern Digital Pathology. However, they represent a particular challenge to artificial intelligence (AI)-based/AI-mediated analysis because pathology labeling is typically done at slide-level, instead of tile-level. Not only are medical diagnoses recorded at the specimen level; the detection of oncogene mutations is also experimentally obtained, and recorded by initiatives like The Cancer Genome Atlas (TCGA), at the slide level. This configures a dual challenge: (a) accurately predicting the overall cancer phenotype and (b) finding out what cellular morphologies are associated with it at the tile level. To better understand and address these challenges, two existing weakly supervised Multiple Instance Learning (MIL) approaches were explored and compared: Attention MIL (AMIL) and Additive MIL (AdMIL). These architectures were analyzed on tumor detection (a task where these models previously obtained good results) and TP53 mutation detection (a much less explored task). For tumor detection, we built a dataset from Lung Squamous Cell Carcinoma (TCGA-LUSC) slides, with 349 positive and 349 negative slides. The patches were extracted at 5× magnification. For TP53 mutation detection, we explored a dataset built from Invasive Breast Carcinoma (TCGA-BRCA) slides, with 347 positive and 347 negative slides. In this case, we explored three different magnification levels: 5×, 10×, and 20×. Our results show that a modified additive implementation of MIL matched the performance of the reference implementation (AUC 0.96), and was only slightly outperformed by AMIL (AUC 0.97) on the tumor detection task. TP53 mutation detection was most sensitive to features at the higher magnifications, where cellular morphology is resolved. More interestingly from the perspective of the molecular pathologist, we highlight the possible ability of these MIL architectures to identify distinct sensitivities to morphological features (through the detection of regions of interest, ROIs) at different magnification levels. This ability of the models to produce tile-level ROIs is very appealing to pathologists, as it opens the possibility of integrating these algorithms into a digital staining application for analysis, facilitating navigation through these high-dimensional images and the diagnostic process.
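The tile-level ROIs come from the attention weights themselves: attention-based MIL scores every tile, and the softmax-normalized scores both weight the bag embedding and serve as an interpretability map over the slide. A dependency-free sketch of that pooling step, following the usual tanh-attention formulation (matrix shapes and names are ours), looks like:

```python
import math

def attention_pool(instances, V, w):
    """Attention-based MIL pooling: a_i ∝ exp(w · tanh(V h_i)),
    bag = sum_i a_i h_i.

    instances: list of tile feature vectors h_i (each of length dim)
    V:         hidden x dim matrix (list of rows)
    w:         hidden-length vector
    Returns (bag_embedding, attention_weights); the weights double
    as tile-level ROI scores.
    """
    def matvec(M, x):
        return [sum(m * xj for m, xj in zip(row, x)) for row in M]

    # Unnormalized attention score per tile.
    scores = []
    for h in instances:
        u = [math.tanh(z) for z in matvec(V, h)]
        scores.append(sum(wi * ui for wi, ui in zip(w, u)))

    # Numerically stable softmax over tiles.
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    attn = [e / total for e in exps]

    # Attention-weighted sum of tile features -> slide embedding.
    dim = len(instances[0])
    bag = [sum(a * h[d] for a, h in zip(attn, instances)) for d in range(dim)]
    return bag, attn
```

Ranking tiles by `attn` is what lets these models surface candidate ROIs at each magnification level without any tile-level labels.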
Multiple Instance Learning for WSI: A comparative analysis of attention-based approaches
Martim Afonso, Praphulla M.S. Bhawsar, Monjoy Saha, Jonas S. Almeida, Arlindo L. Oliveira
Q2 Medicine Pub Date: 2024-10-20 DOI: 10.1016/j.jpi.2024.100403
Journal of Pathology Informatics, vol. 15, Article 100403
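Since the abstract above contrasts attention-based and additive pooling, a minimal NumPy sketch of the distinction may help — this is an illustration under assumed shapes and a linear slide-level head, not the authors' implementation (in AMIL/AdMIL the tile features come from a learned CNN backbone and the attention and classifier heads are trained end-to-end):

```python
import numpy as np

rng = np.random.default_rng(0)

# A "bag" of N tile embeddings for one slide (stand-ins for CNN features).
N, D = 8, 16
H = rng.normal(size=(N, D))            # instance features h_i
V = rng.normal(size=(D, D))
w = rng.normal(size=D)                 # attention parameters (placeholders)
clf = rng.normal(size=D)               # linear slide-level head

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Attention MIL (AMIL): softmax attention pools tiles into one bag
# embedding, which is then classified at the slide level.
a = softmax(np.tanh(H @ V) @ w)        # one weight per tile, sums to 1
amil_score = (a @ H) @ clf

# Additive MIL (AdMIL): each tile contributes its own attention-scaled
# logit, and the slide score is their sum -- so tile-level contributions
# (candidate ROIs) can be read off directly.
tile_logits = a * (H @ clf)
admil_score = tile_logits.sum()

# With a linear head the two scores coincide; the additive form simply
# makes each tile's share of the slide-level decision explicit.
print(np.isclose(amil_score, admil_score))  # True
roi = int(tile_logits.argmax())             # most influential tile
```

The per-tile logits are what make the additive formulation attractive for ROI visualization: no post-hoc attribution is needed to see which tiles drove the slide-level prediction.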
A multicenter study to evaluate the analytical precision by pathologists using the Aperio GT 450 DX
Q2 Medicine Pub Date : 2024-10-09 DOI: 10.1016/j.jpi.2024.100401
Thomas W. Bauer, Matthew G. Hanna, Kelly D. Smith, S. Joseph Sirintrapun, Meera R. Hameed, Deepti Reddi, Bernard S. Chang, Orly Ardon, Xiaozhi Zhou, Jenny V. Lewis, Shubham Dayal, Joseph Chiweshe, David Ferber, Aysegul Ergin Sutcu, Michael White

Background

Digital pathology systems (DPS) are emerging as capable technologies for clinical practice. Studies have analyzed pathologists' diagnostic concordance by comparing reviews of whole slide images (WSIs) to glass slides (e.g., accuracy). This observational study evaluated the reproducibility of pathologists' diagnostic reviews using the Aperio GT 450 DX under slightly different conditions (precision).

Method

Diagnostic precision was tested in three conditions: intra-system (within systems), inter-system/site (between systems/sites), and intra- and inter-pathologist (within and between pathologists). A total of five study/reading pathologists (one pathologist each for intra-system, inter-system/site, and three for intra-pathologist/inter-pathologist analyses) were assigned to the respective sub-studies.
A panel of 69 glass slides with 23 unique histological features was used to evaluate the WSI system's precision. Each glass slide was scanned to generate a unique WSI. From each WSI, fields of view (FOVs) were generated (at least 2 FOVs/WSI) that included the selected features (1–3 features/FOV). Each pathologist reviewed the digital slides and identified which morphological features, if any, were present in each defined FOV. To minimize recall bias, FOVs were also extracted from an additional 12 wild card slides from different organ types. The pathologists reviewed these wild card FOVs as well; however, the corresponding feature identifications were not included in the final data analysis.

Results

Each measured endpoint met the pre-defined acceptance criterion that the lower bound of the 95% confidence interval (CI) of the overall agreement (OA) rate be ≥85% for each sub-study. The lower bound of the 95% CI of the OA rate was 95.8% for the intra-system analysis, 94.9% for the inter-system analysis, 92.4% for the intra-pathologist analysis, and 90.6% for the inter-pathologist analysis.

Conclusion

The study results indicate that pathologists using the Aperio GT 450 DX WSI system can precisely identify histological features that may be required for accurately diagnosing anatomic pathology cases.
Journal of Pathology Informatics, vol. 15, Article 100401
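The acceptance criterion in the precision study above (lower bound of the 95% CI of the overall agreement rate ≥85%) can be illustrated with a Wilson score interval. Note this is only a sketch: the paper does not state which interval estimator was used, and the counts below are hypothetical.

```python
import math

def wilson_lower(successes, n, z=1.96):
    """Lower bound of the Wilson score 95% CI for a proportion."""
    if n == 0:
        return 0.0
    p = successes / n
    center = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - margin) / (1 + z**2 / n)

# Hypothetical sub-study: 980 agreements out of 1000 feature reads.
lb = wilson_lower(980, 1000)
print(f"OA point estimate: 98.0%, 95% CI lower bound: {lb:.1%}")
# A sub-study would pass the pre-defined criterion when lb >= 0.85.
```

This also shows why the criterion is stricter than a point estimate: a small sub-study can have a high observed agreement rate yet still fail because its interval is wide.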