Big Data | IF 2.6 | CAS Tier 4 (Computer Science) | JCR Q2 (COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS) | Pub Date: 2023-10-01 | Epub Date: 2023-01-19 | DOI: 10.1089/big.2021.0333
Guowei Zhang, Weilan Wang, Ce Zhang, Penghai Zhao, Mingkai Zhang
{"title":"HUTNet:一种高效的卷积神经网络,用于乌琴藏文手写体识别。","authors":"Guowei Zhang, Weilan Wang, Ce Zhang, Penghai Zhao, Mingkai Zhang","doi":"10.1089/big.2021.0333","DOIUrl":null,"url":null,"abstract":"<p><p>Recognition of handwritten Uchen Tibetan characters input has been considered an efficient way of acquiring mass data in the digital era. However, it still faces considerable challenges due to seriously touching letters and various morphological features of identical characters. Thus, deeper neural networks are required to achieve decent recognition accuracy, making an efficient, lightweight model design important to balance the inevitable trade-off between accuracy and latency. To reduce the learnable parameters of the network as much as possible and maintain acceptable accuracy, we introduce an efficient model named HUTNet based on the internal relationship between floating-point operations per second (FLOPs) and Memory Access Cost. The proposed network achieves a ResNet-18-level accuracy of 96.86%, with only a tenth of the parameters. The subsequent pruning and knowledge distillation strategies were applied to further reduce the inference latency of the model. Experiments on the test set (Handwritten Uchen Tibetan Data set by Wang [HUTDW]) containing 562 classes of 42,068 samples show that the compressed model achieves a 96.83% accuracy while maintaining lower FLOPs and fewer parameters. To verify the effectiveness of HUTNet, we tested it on the Chinese Handwriting Data sets Handwriting Database 1.1 (HWDB1.1), in which HUTNet achieved an accuracy of 97.24%, higher than that of ResNet-18 and ResNet-34. In general, we conduct extensive experiments on resource and accuracy trade-offs and show a stronger performance compared with other famous models on HUTDW and HWDB1.1. It also unlocks the critical bottleneck for handwritten Uchen Tibetan recognition on low-power computing devices.</p>","PeriodicalId":51314,"journal":{"name":"Big Data","volume":" ","pages":"387-398"},"PeriodicalIF":2.6000,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"HUTNet: An Efficient Convolutional Neural Network for Handwritten Uchen Tibetan Character Recognition.\",\"authors\":\"Guowei Zhang, Weilan Wang, Ce Zhang, Penghai Zhao, Mingkai Zhang\",\"doi\":\"10.1089/big.2021.0333\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Recognition of handwritten Uchen Tibetan characters input has been considered an efficient way of acquiring mass data in the digital era. However, it still faces considerable challenges due to seriously touching letters and various morphological features of identical characters. Thus, deeper neural networks are required to achieve decent recognition accuracy, making an efficient, lightweight model design important to balance the inevitable trade-off between accuracy and latency. To reduce the learnable parameters of the network as much as possible and maintain acceptable accuracy, we introduce an efficient model named HUTNet based on the internal relationship between floating-point operations per second (FLOPs) and Memory Access Cost. The proposed network achieves a ResNet-18-level accuracy of 96.86%, with only a tenth of the parameters. The subsequent pruning and knowledge distillation strategies were applied to further reduce the inference latency of the model. 
Experiments on the test set (Handwritten Uchen Tibetan Data set by Wang [HUTDW]) containing 562 classes of 42,068 samples show that the compressed model achieves a 96.83% accuracy while maintaining lower FLOPs and fewer parameters. To verify the effectiveness of HUTNet, we tested it on the Chinese Handwriting Data sets Handwriting Database 1.1 (HWDB1.1), in which HUTNet achieved an accuracy of 97.24%, higher than that of ResNet-18 and ResNet-34. In general, we conduct extensive experiments on resource and accuracy trade-offs and show a stronger performance compared with other famous models on HUTDW and HWDB1.1. It also unlocks the critical bottleneck for handwritten Uchen Tibetan recognition on low-power computing devices.</p>\",\"PeriodicalId\":51314,\"journal\":{\"name\":\"Big Data\",\"volume\":\" \",\"pages\":\"387-398\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2023-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Big Data\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1089/big.2021.0333\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2023/1/19 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Big Data","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1089/big.2021.0333","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/1/19 0:00:00","PubModel":"Epub","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
HUTNet: An Efficient Convolutional Neural Network for Handwritten Uchen Tibetan Character Recognition.
Recognition of handwritten Uchen Tibetan character input is considered an efficient way of acquiring mass data in the digital era. It still faces considerable challenges, however, owing to severely touching letters and the varied morphological features of identical characters. Deeper neural networks are therefore required to achieve decent recognition accuracy, which makes an efficient, lightweight model design important for balancing the inevitable trade-off between accuracy and latency. To reduce the learnable parameters of the network as much as possible while maintaining acceptable accuracy, we introduce an efficient model named HUTNet, built on the internal relationship between floating-point operations (FLOPs) and memory access cost (MAC). The proposed network achieves ResNet-18-level accuracy of 96.86% with only a tenth of the parameters. Pruning and knowledge distillation strategies were subsequently applied to further reduce the inference latency of the model. Experiments on the test set of the Handwritten Uchen Tibetan Data set by Wang (HUTDW), which contains 562 classes and 42,068 samples, show that the compressed model achieves 96.83% accuracy while keeping FLOPs and parameter counts low. To verify the effectiveness of HUTNet, we also tested it on the Chinese handwriting data set Handwriting Database 1.1 (HWDB1.1), where it achieved an accuracy of 97.24%, higher than that of ResNet-18 and ResNet-34. Overall, we conduct extensive experiments on the resource-accuracy trade-off and show stronger performance than other well-known models on both HUTDW and HWDB1.1. This work also removes a critical bottleneck for handwritten Uchen Tibetan recognition on low-power computing devices.
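The abstract names three efficiency levers: a FLOPs- and MAC-aware architecture, pruning, and knowledge distillation. The two PyTorch sketches below illustrate the general techniques involved; they are not the authors' code, and all layer widths and hyperparameters (c_in, c_out, T, alpha) are assumed values for illustration. First, depthwise-separable convolution is one standard way lightweight CNNs achieve large parameter and FLOPs reductions relative to full convolutions, of the kind the "tenth of the parameters" claim reflects:

```python
# Illustrative sketch only (not HUTNet's published architecture):
# comparing a full 3x3 convolution with a depthwise-separable equivalent.
import torch.nn as nn

c_in, c_out, k = 64, 128, 3  # hypothetical layer sizes

standard = nn.Conv2d(c_in, c_out, k, padding=1)        # full 3x3 convolution
separable = nn.Sequential(
    nn.Conv2d(c_in, c_in, k, padding=1, groups=c_in),  # depthwise 3x3
    nn.Conv2d(c_in, c_out, 1),                         # pointwise 1x1
)

def n_params(module):
    return sum(p.numel() for p in module.parameters())

print(n_params(standard), n_params(separable))  # 73856 vs 8960: roughly 8x fewer
```

Likewise, a minimal knowledge-distillation loss of the kind the abstract mentions blends temperature-softened teacher targets with hard labels, following Hinton et al.:

```python
# Minimal knowledge-distillation loss sketch; T and alpha are assumed values,
# not the paper's hyperparameters.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # KL divergence between temperature-softened teacher and student outputs
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so soft-target gradients match the hard-label scale
    hard = F.cross_entropy(student_logits, labels)  # standard supervised loss
    return alpha * soft + (1.0 - alpha) * hard
```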
Big Data: COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS; COMPUTER SCIENCE, THEORY & METHODS
CiteScore: 9.10
Self-citation rate: 2.20%
Annual publications: 60
Journal introduction:
Big Data is the leading peer-reviewed journal covering the challenges and opportunities in collecting, analyzing, and disseminating vast amounts of data. The Journal addresses questions surrounding this powerful and growing field of data science and facilitates the efforts of researchers, business managers, analysts, developers, data scientists, physicists, statisticians, infrastructure developers, academics, and policymakers to improve operations, profitability, and communications within their businesses and institutions.
Spanning a broad array of disciplines focusing on novel big data technologies, policies, and innovations, the Journal brings together the community to address current challenges and to foster effective efforts to organize, store, disseminate, protect, and manipulate these data, and, most importantly, to find the most effective strategies to make this incredible amount of information work to benefit society, industry, academia, and government.
Big Data coverage includes:
Big data industry standards,
New technologies being developed specifically for big data,
Data acquisition, cleaning, distribution, and best practices,
Data protection, privacy, and policy,
Business interests from research to product,
The changing role of business intelligence,
Visualization and design principles of big data infrastructures,
Physical interfaces and robotics,
Social networking advantages for Facebook, Twitter, Amazon, Google, etc.,
Opportunities around big data and how companies can harness it to their advantage.