
Latest publications in the International Journal on Document Analysis and Recognition

Handwritten stenography recognition and the LION dataset
IF 2.3 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-06-15 DOI: 10.1007/s10032-024-00479-6
Raphaela Heil, Malin Nauwerck

In this paper, we establish the first baseline for handwritten stenography recognition, using the novel LION dataset, and investigate the impact of including selected aspects of stenographic theory in the recognition process. We make the LION dataset publicly available with the aim of encouraging future research in handwritten stenography recognition. A state-of-the-art text recognition model is trained to establish a baseline. Stenographic domain knowledge is integrated by transforming the target sequences into representations which approximate diplomatic transcriptions, wherein each symbol in the script is represented by its own character in the transliteration, as opposed to corresponding combinations of characters from the Swedish alphabet. Four such encoding schemes are evaluated, and results are further improved by integrating a pre-training scheme based on synthetic data. The baseline model achieves an average test character error rate (CER) of 29.81% and a word error rate (WER) of 55.14%. Test error rates are reduced significantly (p < 0.01) by combining stenography-specific target sequence encodings with pre-training and fine-tuning, yielding CERs in the range of 24.5–26% and WERs of 44.8–48.2%. An analysis of selected recognition errors illustrates the challenges that the stenographic writing system poses to text recognition. This work establishes the first baseline for handwritten stenography recognition. Our proposed integration of stenography-specific knowledge, in conjunction with pre-training and fine-tuning on synthetic data, yields considerable improvements. Together with our precursor study on the subject, this is the first work to apply modern handwritten text recognition to stenography. The dataset and our code are publicly available via Zenodo.
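
The character error rate (CER) and word error rate (WER) quoted above are edit-distance-based metrics. As a minimal illustration of how such rates are typically computed (a generic sketch, not the authors' evaluation code; the example strings are made up):

    def levenshtein(ref, hyp):
        # Minimum number of insertions, deletions and substitutions turning ref into hyp.
        prev = list(range(len(hyp) + 1))
        for i, r in enumerate(ref, 1):
            curr = [i]
            for j, h in enumerate(hyp, 1):
                curr.append(min(prev[j] + 1,              # deletion
                                curr[j - 1] + 1,          # insertion
                                prev[j - 1] + (r != h)))  # substitution
            prev = curr
        return prev[-1]

    def cer(reference, hypothesis):
        return levenshtein(list(reference), list(hypothesis)) / max(1, len(reference))

    def wer(reference, hypothesis):
        return levenshtein(reference.split(), hypothesis.split()) / max(1, len(reference.split()))

    print(cer("stenography", "stenograpy"))                   # 1 deleted character over 11
    print(wer("the quick brown fox", "the quick brown box"))  # 1 substituted word over 4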

Citations: 0
Compactnet: a lightweight convolutional neural network for one-shot online signature verification
IF 2.3 CAS Zone 4 (Computer Science) Q1 Computer Science Pub Date: 2024-05-27 DOI: 10.1007/s10032-024-00478-7
Napa Sae-Bae, Nida Chatwattanasiri, Somkait Udomhunsakul

This paper proposes a method for the online signature verification task that allows the signature to be verified effectively using a single enrolled signature sample. The method utilizes a neural network with two one-dimensional convolutional neural network (1D-CNN) components to extract the vector representation of an online signature. The first component is a global 1D-CNN with full-length kernels. The second component is the standard 1D-CNN with partial length kernels that have been successfully used in many time-series classification tasks. The network is trained from a set of online signature samples to extract the vector representation of unknown signatures. The experimental results demonstrated that when using a vector representation derived from the proposed network, a single unseen enrolled signature sample achieved an Equal Error Rate (EER) of 4.35% when tested against authentic signatures of other users. This result indicates the effectiveness of the network in accurately distinguishing between genuine signatures and those of different users.
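
The two-branch design described above, a global 1D-CNN whose kernel spans the whole sequence plus a standard 1D-CNN with short kernels, can be sketched as follows. This is a hedged PyTorch illustration only; the channel counts, sequence length and embedding size are our assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn

    class TwoBranch1DCNN(nn.Module):
        """Illustrative signature embedder: a 'global' branch whose kernel covers the whole
        (fixed-length) input sequence, plus a conventional 1D-CNN branch with short kernels."""

        def __init__(self, in_channels=12, seq_len=256, embed_dim=64):
            super().__init__()
            # Global branch: one kernel as long as the input sequence.
            self.global_branch = nn.Conv1d(in_channels, 32, kernel_size=seq_len)
            # Local branch: short kernels followed by global average pooling.
            self.local_branch = nn.Sequential(
                nn.Conv1d(in_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.fc = nn.Linear(32 + 64, embed_dim)

        def forward(self, x):                      # x: (batch, channels, seq_len)
            g = self.global_branch(x).squeeze(-1)  # (batch, 32)
            l = self.local_branch(x).squeeze(-1)   # (batch, 64)
            return self.fc(torch.cat([g, l], dim=1))

    # Verification would compare the distance between the enrolled and the query embedding
    # against a threshold chosen, for example, at the equal error rate.
    emb = TwoBranch1DCNN()(torch.randn(2, 12, 256))
    print(emb.shape)  # torch.Size([2, 64])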

Citations: 0
Experimental study of rehearsal-based incremental classification of document streams
IF 2.3 CAS Zone 4 (Computer Science) Q1 Computer Science Pub Date: 2024-05-11 DOI: 10.1007/s10032-024-00467-w
Usman Malik, Muriel Visani, Nicolas Sidere, Mickael Coustaty, Aurelie Joseph

This research work proposes a novel protocol for rehearsal-based incremental learning models for the classification of business document streams using deep learning and, in particular, transformer-based natural language processing techniques. When implementing a rehearsal-based incremental classification model, the questions raised most often for parameterizing the model relate to the number of instances from “old” classes (learned in previous training iterations) which need to be kept in memory and the optimal number of new classes to be learned at each iteration. In this paper, we propose an incremental learning protocol that involves training incremental models using a weight-sharing strategy between transformer model layers across incremental training iterations. We provide a thorough experimental study that enables us to determine optimal ranges for various parameters in the context of incremental classification of business document streams. We also study the effect of the order in which the classes are presented to the model for learning and the effect of class imbalance on the model’s performance. Our results reveal no significant difference in the performance of our incrementally trained model and its statically trained counterpart after all training iterations (especially when, in the presence of class imbalance, the most represented classes are learned first). In addition, our proposed approach shows improvements of 1.55% and 3.66% over a baseline model on two business document datasets. Based on this experimental study, we provide a list of recommendations for researchers and developers for training rehearsal-based incremental classification models for business document streams. Our protocol can be further reused for other final applications.
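
In rehearsal-based incremental learning, a small memory of exemplars from previously learned classes is replayed alongside each batch of new classes. The sketch below illustrates that general mechanism only; the buffer size and the train_fn callback are our assumptions, and it is not the authors' protocol, which additionally shares transformer layer weights across iterations.

    import random
    from collections import defaultdict

    class RehearsalBuffer:
        """Keep up to `per_class` exemplars for every class seen so far."""
        def __init__(self, per_class=50):
            self.per_class = per_class
            self.memory = defaultdict(list)   # class label -> list of stored documents

        def add(self, documents, labels):
            for doc, lab in zip(documents, labels):
                if len(self.memory[lab]) < self.per_class:
                    self.memory[lab].append(doc)
                else:
                    # Replace a random stored exemplar once the per-class buffer is full.
                    self.memory[lab][random.randrange(self.per_class)] = doc

        def replay_set(self):
            return [(doc, lab) for lab, docs in self.memory.items() for doc in docs]

    def incremental_training(model, tasks, buffer, train_fn):
        """tasks: iterable of (documents, labels) batches, each introducing new classes."""
        for documents, labels in tasks:
            mixed = list(zip(documents, labels)) + buffer.replay_set()
            random.shuffle(mixed)
            train_fn(model, mixed)            # fine-tune on new data plus rehearsal exemplars
            buffer.add(documents, labels)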

Citations: 0
Deformity removal from handwritten text documents using variable cycle GAN
IF 2.3 CAS Zone 4 (Computer Science) Q1 Computer Science Pub Date: 2024-05-07 DOI: 10.1007/s10032-024-00466-x
Shivangi Nigam, Adarsh Prasad Behera, Shekhar Verma, P. Nagabhushan

Text recognition systems typically work well for printed documents but struggle with handwritten documents due to different writing styles, background complexities, added noise of image acquisition methods, and deformed text images such as strike-offs and underlines. These deformities change the structural information, making it difficult to restore the deformed images while maintaining the structural information and preserving the semantic dependencies of the local pixels. Current adversarial networks are unable to preserve the structural and semantic dependencies as they focus on individual pixel-to-pixel variation and encourage non-meaningful aspects of the images. To address this, we propose a Variable Cycle Generative Adversarial Network (VCGAN) that considers the perceptual quality of the images. By using a variable Content Loss (Top-k Variable Loss, TV_k), VCGAN preserves the inter-dependence of spatially close pixels while removing the strike-off strokes. The similarity of the images is computed with TV_k considering the intensity variations that do not interfere with the semantic structures of the image. Our results show that VCGAN can remove most deformities with an elevated F1 score of 97.40% and outperforms current state-of-the-art algorithms with a character error rate of 7.64% and word accuracy of 81.53% when tested on the handwritten text recognition system.
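
One plausible reading of the Top-k Variable Loss TV_k mentioned above is an L1 content loss restricted to the k largest per-pixel differences, so the generator concentrates on the regions it reconstructs worst. The PyTorch sketch below reflects that reading only and is not the paper's exact formulation; the L1 base distance and the ratio parameter are assumptions.

    import torch

    def top_k_content_loss(generated, target, ratio=0.25):
        """L1 content loss averaged over the top-k largest per-pixel differences.

        generated, target: tensors of shape (batch, channels, height, width)
        ratio: fraction of pixels per image that contribute to the loss.
        """
        diff = (generated - target).abs().flatten(start_dim=1)   # (batch, n_pixels)
        k = max(1, int(ratio * diff.shape[1]))
        top_k, _ = diff.topk(k, dim=1)                           # k worst-reconstructed pixels
        return top_k.mean()

    # Usage inside a CycleGAN-style training step (adversarial terms omitted):
    fake_clean = torch.rand(4, 1, 64, 256)    # generator output for struck-through text
    clean_ref = torch.rand(4, 1, 64, 256)     # corresponding clean reference image
    print(top_k_content_loss(fake_clean, clean_ref).item())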

Citations: 0
Automated systems for diagnosis of dysgraphia in children: a survey and novel framework
IF 2.3 CAS Zone 4 (Computer Science) Q1 Computer Science Pub Date: 2024-04-15 DOI: 10.1007/s10032-024-00464-z
Jayakanth Kunhoth, Somaya Al-Maadeed, Suchithra Kunhoth, Younes Akbari, Moutaz Saleh

Learning disabilities, which primarily interfere with basic learning skills such as reading, writing, and math, are known to affect around 10% of children in the world. Poor motor skills and motor coordination, as part of a neurodevelopmental disorder, can become a causative factor for difficulty in learning to write (dysgraphia), hindering the academic track of an individual. The signs and symptoms of dysgraphia include but are not limited to irregular handwriting, improper handling of the writing medium, slow or labored writing, unusual hand position, etc. The widely accepted assessment criterion for all types of learning disabilities, including dysgraphia, has traditionally relied on examinations conducted by medical experts. However, in recent years, artificial intelligence has been employed to develop diagnostic systems for learning disabilities, utilizing diverse modalities of data, including handwriting analysis. This work presents a review of the existing automated dysgraphia diagnosis systems for children in the literature. The main focus of the work is to review artificial intelligence-based systems for dysgraphia diagnosis in children. This work discusses the data collection methods, important handwriting features, and machine learning algorithms employed in the literature for the diagnosis of dysgraphia. Apart from that, this article discusses some of the non-artificial-intelligence-based automated systems. Furthermore, this article discusses the drawbacks of existing systems and proposes a novel framework for dysgraphia diagnosis and assistance evaluation.
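
Many of the surveyed systems share a generic pipeline: extract kinematic and pressure statistics from online handwriting recordings, then train a standard classifier. The sketch below shows only that generic pipeline; the feature set, the random-forest classifier and the synthetic recordings are our assumptions, not any particular system from the survey.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def stroke_features(t, x, y, pressure):
        """Simple summary features from one online handwriting recording."""
        dt = np.diff(t)
        speed = np.hypot(np.diff(x), np.diff(y)) / np.maximum(dt, 1e-6)
        return np.array([
            speed.mean(), speed.std(),            # writing speed statistics
            pressure.mean(), pressure.std(),      # pen pressure statistics
            t[-1] - t[0],                         # total writing duration
        ])

    # Hypothetical dataset: one recording per child, label 1 = at risk of dysgraphia.
    rng = np.random.default_rng(0)
    recordings = [(np.cumsum(rng.random(200)), rng.random(200), rng.random(200), rng.random(200))
                  for _ in range(40)]
    X = np.stack([stroke_features(*r) for r in recordings])
    y = rng.integers(0, 2, size=40)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(clf.predict(X[:3]))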

Citations: 0
Children age group detection based on human–computer interaction and time series analysis
IF 2.3 CAS Zone 4 (Computer Science) Q1 Computer Science Pub Date: 2024-03-06 DOI: 10.1007/s10032-024-00462-1
Juan Carlos Ruiz-Garcia, Carlos Hojas, Ruben Tolosana, Ruben Vera-Rodriguez, Aythami Morales, Julian Fierrez, Javier Ortega-Garcia, Jaime Herreros-Rodriguez

This article proposes a novel children–computer interaction (CCI) approach for the task of age group detection. This approach focuses on the automatic analysis of the time series generated from the interaction of children with mobile devices. In particular, we extract a set of 25 time series related to spatial, pressure, and kinematic information of the children's interaction while colouring a tree with a pen stylus on a tablet, a specific test from the large-scale public ChildCIdb database. A complete analysis of the proposed approach is carried out using different time series selection techniques to choose the most discriminative ones for the age group detection task: (i) a statistical analysis and (ii) an automatic algorithm called sequential forward search (SFS). In addition, different classification algorithms such as dynamic time warping barycenter averaging (DBA) and hidden Markov models (HMM) are studied. Accuracy results over 85% are achieved, outperforming previous approaches in the literature, even in more challenging age group conditions. Finally, the approach presented in this study can benefit many children-related applications, for example, towards an age-appropriate environment with the technology.
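
Sequential forward search, mentioned above as one of the selection techniques, greedily adds whichever time series most improves a validation score until no candidate helps. A minimal generic sketch follows; the scoring callback and the toy feature names are assumptions, and in the paper the score would come from classifiers such as DBA or HMM.

    def sequential_forward_search(candidates, score_fn, max_features=None):
        """Greedy SFS: repeatedly add the candidate feature that maximises score_fn.

        candidates : list of feature identifiers (e.g. names of the 25 time series)
        score_fn   : callable taking a list of selected features and returning a
                     validation score (higher is better), e.g. cross-validated accuracy
        """
        selected, best_score = [], float("-inf")
        remaining = list(candidates)
        while remaining and (max_features is None or len(selected) < max_features):
            scored = [(score_fn(selected + [c]), c) for c in remaining]
            score, best_c = max(scored)
            if score <= best_score:        # no candidate improves the current subset
                break
            selected.append(best_c)
            remaining.remove(best_c)
            best_score = score
        return selected, best_score

    # Toy usage with a made-up scoring function that rewards pressure features.
    features = ["x", "y", "pressure_mean", "pressure_std", "velocity"]
    sel, sc = sequential_forward_search(features, lambda s: sum("pressure" in f for f in s))
    print(sel, sc)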

Citations: 0
An unsupervised automatic organization method for Professor Shirakawa’s hand-notated documents of oracle bone inscriptions
IF 2.3 CAS Zone 4 (Computer Science) Q1 Computer Science Pub Date: 2024-03-05 DOI: 10.1007/s10032-024-00463-0
Xuebin Yue, Ziming Wang, Ryuto Ishibashi, Hayata Kaneko, Lin Meng

As one of the most influential Chinese cultural researchers of the second half of the twentieth century, Professor Shirakawa was active in the research field of ancient Chinese characters. He has left behind many valuable research documents, especially his hand-notated oracle bone inscription (OBI) documents. OBIs are among the world’s oldest characters and were used in the Shang Dynasty about 3600 years ago for divination and recording events. The organization of OBIs is helpful not only for better understanding Prof. Shirakawa’s research but also for the further study of OBIs in general and their importance in ancient Chinese history. This paper proposes an unsupervised automatic organization method to organize Prof. Shirakawa’s OBIs and construct a handwritten OBI data set for neural network learning. First, a suite of noise reduction techniques is proposed to remove strangely shaped noise and reduce the data loss of OBIs. Secondly, a novel segmentation method based on the supervised classification of OBI regions is proposed to reduce adverse effects between characters for more accurate OBI segmentation. Thirdly, a unique unsupervised clustering method is proposed to classify the segmented characters. Finally, all identical characters in the hand-notated OBI documents are organized together. The evaluation results show that the proposed noise reduction removes noise, including number information and closed-loop-like edges in the dataset, with an accuracy of 97.85%. In addition, the accuracy of the supervised classification of OBI regions based on our model achieves 85.50%, which is higher than that of eight state-of-the-art deep learning models, and a particular preprocessing method we propose improves the classification accuracy by nearly 11.50%. The accuracy of OBI clustering based on supervised classification achieves 74.91%. These results demonstrate the effectiveness of our proposed unsupervised automatic organization of Prof. Shirakawa’s hand-notated OBI documents. The code and datasets are available at http://www.ihpc.se.ritsumei.ac.jp/obidataset.html.
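
The final grouping step, clustering the segmented characters so that occurrences of the same glyph end up together, can be illustrated with plain k-means over flattened, size-normalised crops. This is only a generic sketch of the idea; the 32x32 crops, the binarisation and k-means itself are our assumptions, and the paper uses its own clustering method.

    import numpy as np

    def kmeans(X, k, iters=50, seed=0):
        """Plain k-means on row vectors X, returning (labels, centroids)."""
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centroids[j] = X[labels == j].mean(axis=0)
        return labels, centroids

    # Hypothetical input: segmented OBI character crops already resized to 32x32
    # and binarised, stacked into an array of shape (n_glyphs, 32, 32).
    crops = np.random.default_rng(1).random((100, 32, 32)) > 0.5
    X = crops.reshape(len(crops), -1).astype(float)
    labels, _ = kmeans(X, k=10)
    print(np.bincount(labels))   # how many crops fell into each cluster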

Citations: 0
On the improvement of handwritten text line recognition with octave convolutional recurrent neural networks
IF 2.3 CAS Zone 4 (Computer Science) Q1 Computer Science Pub Date: 2024-02-20 DOI: 10.1007/s10032-024-00460-3
Dayvid Castro, Cleber Zanchettin, Luís A. Nunes Amaral

Off-line handwritten text recognition (HTR) poses a significant challenge due to the complexities of variable handwriting styles, background degradation, and unconstrained word sequences. This work tackles the handwritten text line recognition problem using octave convolutional recurrent neural networks (OctCRNN). Our approach requires no word segmentation, preprocessing, or explicit feature extraction and leverages octave convolutions to process multiscale features without increasing the number of learnable parameters. We thoroughly investigate the OctCRNN under different settings, including an octave design that efficiently balances computational cost and recognition performance, by formulating an experimental pipeline with a visualization step to get intuitions about how the model works compared to a counterpart based on traditional convolutions. The system becomes complete by adding a language model to increase linguistic knowledge. Finally, we assess the performance of our solution using character and word error rates against established handwritten text recognition benchmarks: IAM, RIMES, and ICFHR 2016 READ. According to the results, our proposal achieves state-of-the-art performance while reducing the computational requirements. Our findings suggest that the architecture provides a robust framework for building HTR systems.
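
An octave convolution keeps a high-frequency feature map at full resolution and a low-frequency map at half resolution, connected by four convolution paths; because the path widths sum to those of an ordinary convolution, the parameter count does not grow. The PyTorch sketch below is a simplified 2D illustration only; the alpha split, channel counts and the omission of the special first and last layers are our simplifications.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class OctConv2d(nn.Module):
        """Simplified octave convolution: alpha is the fraction of low-frequency channels."""
        def __init__(self, in_ch, out_ch, kernel_size=3, alpha=0.5):
            super().__init__()
            in_l, out_l = int(alpha * in_ch), int(alpha * out_ch)
            in_h, out_h = in_ch - in_l, out_ch - out_l
            pad = kernel_size // 2
            self.h2h = nn.Conv2d(in_h, out_h, kernel_size, padding=pad)
            self.h2l = nn.Conv2d(in_h, out_l, kernel_size, padding=pad)
            self.l2l = nn.Conv2d(in_l, out_l, kernel_size, padding=pad)
            self.l2h = nn.Conv2d(in_l, out_h, kernel_size, padding=pad)

        def forward(self, x_h, x_l):
            # High-frequency output: high-to-high plus upsampled low-to-high.
            y_h = self.h2h(x_h) + F.interpolate(self.l2h(x_l), size=x_h.shape[-2:], mode="nearest")
            # Low-frequency output: low-to-low plus pooled high-to-low.
            y_l = self.l2l(x_l) + self.h2l(F.avg_pool2d(x_h, kernel_size=2))
            return y_h, y_l

    # A text-line feature map split into 32 high-frequency and 32 low-frequency channels.
    x_h, x_l = torch.randn(1, 32, 64, 256), torch.randn(1, 32, 32, 128)
    y_h, y_l = OctConv2d(in_ch=64, out_ch=64, alpha=0.5)(x_h, x_l)
    print(y_h.shape, y_l.shape)   # (1, 32, 64, 256) and (1, 32, 32, 128)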

Citations: 0
Training transformer architectures on few annotated data: an application to historical handwritten text recognition
IF 2.3 CAS Zone 4 (Computer Science) Q1 Computer Science Pub Date: 2024-01-25 DOI: 10.1007/s10032-023-00459-2
Killian Barrere, Yann Soullard, Aurélie Lemaitre, Bertrand Coüasnon

Transformer-based architectures show excellent results on the task of handwritten text recognition, becoming the standard architecture for modern datasets. However, they require a significant amount of annotated data to achieve competitive results, and they typically rely on synthetic data to solve this problem. Historical handwritten text recognition represents a challenging task due to degradations, specific handwritings for which few examples are available, and ancient languages that vary over time. These limitations also make it difficult to generate realistic synthetic data. Given sufficient and appropriate data, Transformer-based architectures could alleviate these concerns, thanks to their ability to have a global view of textual images and their language modeling capabilities. In this paper, we propose the use of a lightweight Transformer model to tackle the task of historical handwritten text recognition. To train the architecture, we introduce realistic-looking synthetic data reproducing the style of historical handwritings. We present a specific strategy, both for training and prediction, to deal with historical documents, where only a limited amount of training data is available. We evaluate our approach on the ICFHR 2018 READ dataset, which is dedicated to handwriting recognition in specific historical documents. The results show that our Transformer-based approach is able to outperform existing methods.
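
Synthetic training lines for pre-training can, in the simplest case, be produced by rendering text with handwriting-like fonts and adding mild noise. The sketch below shows only that basic idea; the font path, image size and noise model are assumptions, and the paper's generator, which imitates historical handwriting styles, is more elaborate.

    import numpy as np
    from PIL import Image, ImageDraw, ImageFont

    def synthetic_line(text, font_path="fonts/handwriting.ttf", height=64, noise=0.05, seed=0):
        """Render `text` as a grey-scale line image with mild additive noise."""
        font = ImageFont.truetype(font_path, size=int(0.7 * height))   # hypothetical font file
        width = int(font.getlength(text)) + 20
        img = Image.new("L", (width, height), color=255)
        ImageDraw.Draw(img).text((10, height // 8), text, fill=0, font=font)
        arr = np.asarray(img, dtype=np.float32) / 255.0
        rng = np.random.default_rng(seed)
        arr = np.clip(arr + noise * rng.standard_normal(arr.shape), 0.0, 1.0)
        return Image.fromarray((arr * 255).astype(np.uint8))

    # synthetic_line("anno domini 1768").save("line_0001.png")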

Citations: 0
Background grid extraction from historical hand-drawn cadastral maps
IF 2.3 CAS Zone 4 (Computer Science) Q1 Computer Science Pub Date: 2023-12-08 DOI: 10.1007/s10032-023-00457-4
Tauseef Iftikhar, Nazar Khan

We tackle a novel problem of detecting background grids in hand-drawn cadastral maps. Grid extraction is necessary for accessing and contextualizing the actual map content. The problem is challenging since the background grid is the bottommost map layer that is severely occluded by subsequent map layers. We present a novel automatic method for robust, bottom-up extraction of background grid structures in historical cadastral maps. The proposed algorithm extracts grid structures under significant occlusion, missing information, and noise by iteratively providing an increasingly refined estimate of the grid structure. The key idea is to exploit the periodicity of background grid lines to corroborate the existence of each other. We also present an automatic scheme for determining the ‘gridness’ of any detected grid so that the proposed method self-evaluates its result as being good or poor without using ground truth. We present empirical evidence to show that the proposed gridness measure is a good indicator of quality. On a dataset of 268 historical cadastral maps with a resolution of 1424 × 2136 pixels, the proposed method detects grids in 247 images, yielding an average root-mean-square error (RMSE) of 5.0 pixels and an average intersection over union (IoU) of 0.990. On grids self-evaluated as being good, we report an average RMSE of 4.39 pixels and an average IoU of 0.991. To compare with the proposed bottom-up approach, we also develop three increasingly sophisticated top-down algorithms based on RANSAC-based model fitting. Experimental results show that our bottom-up algorithm yields better results than the top-down algorithms. We also demonstrate that using detected background grids for stitching different maps is visually better than both manual and SURF-based stitching.
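
The periodicity cue used by the bottom-up method can be illustrated with a simple estimator: project grid-line evidence onto one axis and read the dominant spacing off the autocorrelation of that profile. The sketch below is only an illustration of the cue; the synthetic evidence map and the projection are assumptions, not the paper's algorithm.

    import numpy as np

    def dominant_period(profile, min_period=10):
        """Return the lag (in pixels) of the strongest autocorrelation peak."""
        p = profile - profile.mean()
        ac = np.correlate(p, p, mode="full")[len(p) - 1:]   # keep non-negative lags only
        return int(np.argmax(ac[min_period:]) + min_period)

    # Hypothetical binary map of vertical grid-line evidence (rows x cols),
    # with lines repeating every 50 columns plus sparse noise.
    rng = np.random.default_rng(0)
    evidence = rng.random((400, 600)) < 0.02
    evidence[:, ::50] = True
    column_profile = evidence.sum(axis=0).astype(float)
    print(dominant_period(column_profile))   # ~50: the estimated grid spacing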

Citations: 0