
2024 1st International Conference on Robotics, Engineering, Science, and Technology (RESTCON): Latest Publications

Methods of Signal to Image Transformation in Photovoltaic Fault Diagnosis in Preparation for Machine Learning Applications
Pub Date : 2024-02-16 DOI: 10.1109/RESTCON60981.2024.10463558
Rolando Pula, Lorena Ilagan, Marcelo Santos
This study explores various techniques for transforming 1-dimensional time-series data into 2-dimensional images, preparing for the application of machine learning models designed for 2D data. Eight distinct methods are introduced: recurrence plots, Markov transition fields, Gramian angular fields, spectrograms, heatmaps, direct plots, phase space transformation, and Poincaré plots. These methods are tested using data from a modeled photovoltaic (PV) grid-connected system, specifically simulating a shorted-string fault and a no-fault condition. The fault and no-fault responses are captured with a fixed window size of 256 sample points, applied consistently across all methods. All transformation methods are tested in Python 3 on a laptop with minimal computing capability. The image generated by each transformation may be a 1-channel grayscale image or a 3-channel RGB image, and its dimensions can be increased or decreased during the saving process. Each method produces a unique visual representation of the shorted-string fault and the no-fault condition, demonstrating diverse perspectives on transforming 1D time-series data into 2D images for subsequent machine learning applications.
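The abstract does not include code; as an illustration of one of the eight transforms it names, a minimal numpy sketch of the Gramian angular field (the summation variant, GASF) applied to a 256-sample window might look like the following. The synthetic sine input is an assumption for demonstration, not the paper's PV data:

```python
import numpy as np

def gramian_angular_field(x):
    """Gramian angular summation field (GASF): rescale the series to
    [-1, 1], map each value to an angle phi = arccos(x), and form the
    2-D image cos(phi_i + phi_j)."""
    x = np.asarray(x, dtype=float)
    span = x.max() - x.min()
    # Min-max rescale to [-1, 1]; guard against a constant window.
    x_scaled = np.zeros_like(x) if span == 0 else 2 * (x - x.min()) / span - 1
    # Clip protects arccos from tiny floating-point overshoot.
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])

# A 256-sample window, matching the paper's fixed window size.
window = np.sin(np.linspace(0, 8 * np.pi, 256))
image = gramian_angular_field(window)
print(image.shape)  # (256, 256) -- a single-channel (grayscale) image
```

The resulting matrix can be saved directly as a 1-channel image, or mapped through a colormap to produce the 3-channel RGB variant the abstract mentions.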
Citations: 0
DETReg Incorporating Semi-Supervised Learning for Object Detection in the Advanced Driver-Assistance Systems
Pub Date : 2024-02-16 DOI: 10.1109/RESTCON60981.2024.10463586
Keita Nakano, Kousuke Matsushima
In Advanced Driver-Assistance Systems (ADAS) and automatic driving, it is important to accurately recognize objects around the vehicle. DETReg is an unsupervised pre-training method using a Transformer, self-supervised by combining localization and categorization. DETReg performs self-supervised learning on unlabeled images, extracting a wide range of features from rich aspects of the data and gaining the flexibility to adapt to many variations. Fine-tuning then uses the labeled dataset of the target task to fit the model to that specific dataset, allowing DETReg to achieve higher accuracy in the object detection task. However, it is difficult to train DETReg efficiently because of its long training time. In this paper, we propose a new pre-training method for object detection, called Semi-DETReg, that utilizes a few supervised labels during self-supervised learning. We incorporate semi-supervised learning into DETReg by using a portion of the supervised training data during pre-training to improve efficiency. We demonstrate the effectiveness of our method by conducting experiments and comparing it to a similarly trained DETReg.
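The paper does not publish its training code; the core idea of folding a few supervised labels into self-supervised pre-training can be sketched as a batch sampler that mixes the unlabeled and labeled pools. The function name, batch size, and labeled fraction below are illustrative assumptions, not values from the paper:

```python
import random

def mixed_pretraining_batches(unlabeled, labeled, batch_size=8,
                              labeled_fraction=0.25, seed=0):
    """Yield pre-training batches mixing unlabeled samples (paired with
    None, for the self-supervised objective) with a small share of labeled
    (sample, annotation) pairs for the supervised objective."""
    rng = random.Random(seed)
    n_labeled = max(1, int(batch_size * labeled_fraction))
    n_unlabeled = batch_size - n_labeled
    while True:
        batch = ([(x, None) for x in rng.sample(unlabeled, n_unlabeled)]
                 + [(x, y) for x, y in rng.sample(labeled, n_labeled)])
        rng.shuffle(batch)
        yield batch

unlabeled_pool = list(range(100))                      # stand-ins for unlabeled images
labeled_pool = [(i, "box+class") for i in range(10)]   # stand-ins for annotated images
batch = next(mixed_pretraining_batches(unlabeled_pool, labeled_pool))
print(sum(y is not None for _, y in batch), "labeled of", len(batch))  # 2 labeled of 8
```

In an actual training loop, each batch element would feed either the self-supervised DETReg loss or the supervised detection loss depending on whether its annotation is present.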
Citations: 0
RESTCON 2024 Messages and Keynote Speakers
Pub Date : 2024-02-16 DOI: 10.1109/restcon60981.2024.10463575
Citations: 0
Mechanical Alloying Process Design by Using DEM Simulation and Experimental Validation
Pub Date : 2024-02-16 DOI: 10.1109/RESTCON60981.2024.10463592
Torsak Boonthai, P. Nunthavarawong, P. Kowitwarangkul, Masaki Fuchiwaki
Mechanical alloying plays a crucial role in controlling and enhancing the characteristics of powder materials, which in turn influence the quality and performance of thermal spray coatings. In this study, the optimal milling parameters for mechanical alloying were determined: a rotational speed of 60 rpm, a milling period of 6 hours, and wet milling conditions. These settings led to the best preparation of feedstock powder, resulting in a minimal particle size of 17.5 µm and a narrow particle size dispersion. The extensive cataracting and impact zones observed at 60 rpm corresponded to 65% of the critical speed, indicating enhanced milling efficiency. Additionally, DEM modeling demonstrated good agreement with experimental findings, highlighting that this rotational speed induced a cataracting regime characterized by a broad zone of impacted particles, yielding a highest impact velocity of 1.79 m/s and a ball indenter force interaction of 1.19 N.
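For context, the "% critical speed" figure follows from the standard ball-mill critical-speed relation, the rotational speed at which centrifugal force balances gravity at the shell. The abstract does not give the mill geometry, so the sketch below simply backs out the effective diameter implied by 60 rpm being 65% of critical; the numbers are derived illustrations, not values from the paper:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def critical_speed_rpm(mill_diameter_m, ball_diameter_m=0.0):
    """Critical mill speed N_c = (60 / 2*pi) * sqrt(2g / (D - d)) rpm,
    i.e. the familiar N_c = 42.3 / sqrt(D - d) with D, d in metres."""
    return 60 / (2 * math.pi) * math.sqrt(2 * G / (mill_diameter_m - ball_diameter_m))

# 60 rpm is stated to be 65% of critical speed, so N_c ~ 92.3 rpm;
# back out the effective diameter that would imply.
n_c = 60 / 0.65
implied_diameter_m = 2 * G / (n_c * 2 * math.pi / 60) ** 2
print(round(n_c, 1), round(implied_diameter_m, 2))  # 92.3 0.21
```

That is, the reported operating point is consistent with an effective mill diameter on the order of 0.2 m, under the simple force-balance model above.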
Citations: 0
Human Detection and Human Pose Classification for Mobile Robots Interaction
Pub Date : 2024-02-16 DOI: 10.1109/RESTCON60981.2024.10463579
Korawee Hirunthakingpunt, Don Dawan, Chikamune Wada, Natinun Maneerung
Mobile robots are widely used in many settings such as industry, hospitals, and restaurants. Human detection and human pose classification are commonly used for human-robot interaction. The current study proposes human pose classification for human-robot interaction to avoid collisions. The presented method has three main steps. First, the algorithm detects the entire human within the determined range of a 3D camera. Second, a K-Nearest Neighbor (KNN) model with skeleton-point features classifies six postures of the detected human: neutral, left raise, right raise, both hands raised, crossed hands, and one open hand forward. Based on the posture classification, these postures command the robot to move forward, stop, stop for a few seconds, or cancel the command. Finally, the command is used to interact with the mobile robot to control its movement and avoid collisions. The experimental results show that the designed algorithm detects and classifies human posture with an accuracy of 86.14% and interacts effectively to avoid collisions, stopping automatically within 1.8 meters between human and robot.
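A minimal sketch of the KNN classification step on flattened skeleton-point feature vectors follows. The synthetic clusters, feature dimensionality, and k are assumptions for illustration, as the abstract does not specify them:

```python
import numpy as np

POSTURES = ["neutral", "left raise", "right raise",
            "both hands raised", "crossed hands", "one hand forward"]

def knn_classify(query, train_X, train_y, k=3):
    """Majority vote among the k nearest training vectors (Euclidean
    distance), the decision rule of a K-Nearest Neighbor classifier."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Toy skeleton features: two tight clusters standing in for two postures.
rng = np.random.default_rng(0)
train_X = np.vstack([rng.normal(0.0, 0.1, (10, 4)),
                     rng.normal(5.0, 0.1, (10, 4))])
train_y = np.array([0] * 10 + [1] * 10)   # indices into POSTURES
print(POSTURES[knn_classify(np.zeros(4), train_X, train_y)])  # neutral
```

In the paper's pipeline, each feature vector would instead be the 3D skeleton joint coordinates extracted from the depth camera, and the six posture labels would map to the robot commands listed above.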
Citations: 0
Assessing the Viability of Generative AI-Created Construction Scaffolding for Deep Learning-Based Image Segmentation
Pub Date : 2024-02-16 DOI: 10.1109/RESTCON60981.2024.10463583
Natthapol Saovana, Chavanont Khosakitchalert
Construction scaffolding serves as a pivotal temporary structure essential for construction activities, exerting a direct influence on site safety conditions. Unfortunately, the lack of documentation often leads to a shortage of the training data necessary for employing image segmentation through deep learning for inspection purposes. In an effort to overcome this bottleneck, Generative AI, adept at creating images from pretrained data, emerges as a potential solution. However, the inherent black-box nature of deep learning introduces the possibility of generating unrealistic images, thereby necessitating a rigorous evaluation, which constitutes the primary focus of our research. Our findings reveal that scaffolding images generated by Generative AI exhibit distinct features that our deep learning model successfully learned, resulting in an impressive mean average precision (mAP) of 82. Nonetheless, discernible patterns in image generation may be lacking, as evidenced by our deep learning system's ability to grasp scaffolding features proficiently, achieving a mAP of 69 even from the initial epoch. This observation suggests potential challenges in generating diverse scaffolding images through the Generative AI approach, emphasizing the need for further investigation before implementing it with real-scenario images.
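For reference, the mAP figures quoted above are means of per-class average precision. One common per-class AP computation (precision at each true-positive rank, normalized by the number of ground-truth instances) can be sketched as follows; the toy ranking is illustrative, not the paper's data:

```python
def average_precision(ranked_hits, n_ground_truth):
    """Average precision for one class. ranked_hits lists detections in
    descending confidence order, True where a detection matched a
    ground-truth instance (e.g. by IoU threshold). AP averages precision
    at each true-positive rank over the ground-truth count."""
    precisions, tp = [], 0
    for rank, hit in enumerate(ranked_hits, start=1):
        if hit:
            tp += 1
            precisions.append(tp / rank)
    return sum(precisions) / n_ground_truth if n_ground_truth else 0.0

# Four ground-truth scaffolds; detections at ranks 1, 2 and 4 are correct.
ap = average_precision([True, True, False, True], n_ground_truth=4)
print(ap)  # 0.6875
```

The mAP is then the mean of this quantity across all classes (here, a single scaffolding class would make mAP equal to AP).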
Citations: 0
Text Extraction by Optical Character Recognition-Based on the Template Card
Pub Date : 2024-02-16 DOI: 10.1109/RESTCON60981.2024.10463567
Panas Thongtaweechaikij, Piyawat Tangpong, J. Inthiam, W. Tangsuksant
This study evaluates the effectiveness of Optical Character Recognition (OCR) in extracting and organizing data from student cards. Assessing diverse OCR techniques, it aims to identify optimal methods for accurate text extraction, considering different formats and languages. The research investigates OCR's impact on information retrieval, analyzing its integration into databases for improved searchability and usability. Our proposed method combines pre-processing with the OCR process, including SIFT and KNN feature matching, the MSER technique for noise detection, and image transformation. For the experiment, all student cards at King Mongkut’s University of Technology North Bangkok were captured with a smartphone whose camera resolution is greater than 2 megapixels. This research compares traditional Tesseract OCR and our proposed method at Intersection over Union (IoU) thresholds of 50% and 70%. The experimental results show that our proposed method at 70% IoU achieves the highest accuracy, 97.36%. According to these results, the proposed method is feasible for our system.
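The 50% and 70% thresholds refer to the standard Intersection over Union overlap between a predicted text region and its ground-truth region. A minimal sketch of the computation for axis-aligned boxes (the coordinates below are illustrative):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # zero if boxes don't overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(round(iou((0, 0, 10, 10), (5, 5, 15, 15)), 4))  # 0.1429 -> fails a 0.5 threshold
```

A detected region counts as a match only when this ratio exceeds the chosen threshold, so the 70% setting demands tighter localization than the 50% one.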
Citations: 0