
Latest Publications in JOIV : International Journal on Informatics Visualization

Distributed Aerial Image Stitching on Multiple Processors using Message Passing Interface
Pub Date : 2024-03-31 DOI: 10.62527/joiv.8.1.1890
Alif Wicaksana Ramadhan, Fira Aulia, Ni Made Lintang Asvini Dewi, Idris Winarno, S. Sukaridhoto
This study investigates the potential of using Message Passing Interface (MPI) parallelization to enhance the speed of the image stitching process. The image stitching process involves combining multiple images to create a seamless panoramic view. This research explores the potential benefits of segmenting photos into distributed tasks among several identical processor nodes to expedite the stitching process. However, it is crucial to consider that increasing the number of nodes may introduce a trade-off between the speed and quality of the stitching process. The initial experiments were conducted without MPI, resulting in a stitching time of 1506.63 seconds. Subsequently, the researchers employed MPI parallelization on two computer nodes, which reduced the stitching time to 624 seconds. Further improvement was observed when four computer nodes were used, resulting in a stitching time of 346.8 seconds. These findings highlight the potential benefits of MPI parallelization for image stitching tasks. The reduced stitching time achieved through parallelization demonstrates the ability to accelerate the overall stitching process. However, it is essential to carefully consider the trade-off between speed and quality when determining the optimal number of nodes to employ. By effectively distributing the workload across multiple nodes, researchers and practitioners can take advantage of the parallel processing capabilities offered by MPI to expedite image stitching tasks. Future studies could explore additional optimization techniques and evaluate the impact on speed and quality to achieve an optimal balance in real-world applications.
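The reported timings imply concrete speedup and parallel-efficiency figures. As a quick arithmetic check (this calculation is ours, not from the paper), applying the standard definitions to the abstract's numbers:

```python
# Speedup and parallel efficiency computed from the stitching times
# reported in the abstract: 1506.63 s serial, 624 s on two nodes,
# 346.8 s on four nodes. Illustrative arithmetic only.

def speedup(serial_time, parallel_time):
    """Classic speedup: S(p) = T(1) / T(p)."""
    return serial_time / parallel_time

def efficiency(serial_time, parallel_time, nodes):
    """Parallel efficiency: E(p) = S(p) / p."""
    return speedup(serial_time, parallel_time) / nodes

T1 = 1506.63
for nodes, t in [(2, 624.0), (4, 346.8)]:
    s = speedup(T1, t)
    e = efficiency(T1, t, nodes)
    print(f"{nodes} nodes: speedup {s:.2f}x, efficiency {e:.2f}")
```

Both efficiencies come out above 1 (about 1.21 on two nodes and 1.09 on four), i.e., superlinear. This hints that partitioning the image set reduces per-node stitching work more than proportionally, which is consistent with the speed-versus-quality trade-off the abstract raises.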
Citations: 0
SCOV-CNN: A Simple CNN Architecture for COVID-19 Identification Based on the CT Images
Pub Date : 2024-03-31 DOI: 10.62527/joiv.8.1.1750
Toto - Haryanto, H. Suhartanto, A. Murni, K. Kusmardi, Marina Yusoff, Jasni Mohmad Zain
Since the coronavirus was first discovered in Wuhan, it has spread widely and was finally declared a global pandemic by the WHO. Image processing plays an essential role in examining the lungs of affected patients. Computed Tomography (CT) and X-ray images have been widely used to examine the lungs of COVID-19 patients. This research aims to design a simple Convolutional Neural Network (CNN) architecture called SCOV-CNN for classifying the virus based on CT images and to implement it in a web-based application. The data used in this work were CT images of 120 patients from hospitals in Brazil. SCOV-CNN was inspired by the LeNet architecture, but it has a deeper convolution and pooling layer structure. Combining kernel sizes of seven and five for the convolutions with padding schemes preserves the feature information in the images. Furthermore, it has three fully connected (FC) layers with a dropout of 0.3 on each. The model was evaluated using sensitivity, specificity, precision, F1 score, and ROC curve values. The results showed that the proposed architecture was comparable to some prominent deep learning techniques in terms of accuracy (0.96), precision (0.98), and F1 score (0.95). The best model was integrated into a website-based system to help and facilitate users' activities. We use Python Flask Pam tools as the web server on the server side and JavaScript for the User Interface (UI) design.
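The claim that seven- and five-sized kernels with padding preserve feature information follows from the standard convolution output-size formula. A minimal sketch, assuming stride 1 and "same" padding (the 64x64 input size is a hypothetical example, not stated in the abstract):

```python
# Output size of a 2-D convolution along one spatial dimension:
#   out = (in - kernel + 2 * pad) // stride + 1
# With stride 1 and "same" padding pad = (kernel - 1) // 2, spatial
# size is preserved for odd kernels such as 7 and 5.

def conv_out(size, kernel, pad, stride=1):
    return (size - kernel + 2 * pad) // stride + 1

def same_pad(kernel):
    return (kernel - 1) // 2

# Hypothetical 64x64 CT slice passed through 7x7 then 5x5 convolutions:
size = 64
for k in (7, 5):
    size = conv_out(size, k, same_pad(k))
print(size)  # stays 64 under "same" padding
```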
Citations: 0
Development of a Java Library with Bacterial Foraging Optimization for Feature Selection of High-Dimensional Data
Pub Date : 2024-03-31 DOI: 10.62527/joiv.8.1.2149
T. Badriyah, I. Syarif, Fitriani Rohmah Hardiyanti
High-dimensional data allows researchers to conduct comprehensive analyses. However, such data often exhibits characteristics like small sample sizes, class imbalance, and high complexity, posing challenges for classification. One approach employed to tackle high-dimensional data is feature selection. This study uses the Bacterial Foraging Optimization (BFO) algorithm for feature selection. A dedicated BFO Java library is developed to extend the capabilities of WEKA for feature selection purposes. Experimental results confirm the successful integration of BFO. The outcomes of BFO's feature selection are then compared against those of other evolutionary algorithms, namely Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), and Ant Colony Optimization (ACO). The comparison of algorithms was conducted using the same datasets. The experimental results indicate that BFO effectively reduces features while maintaining consistent accuracy. In 4 out of 9 datasets, BFO outperforms the other algorithms, and it shows superior processing-time performance in 6 datasets. BFO is therefore a favorable choice for selecting features in high-dimensional datasets, providing consistent accuracy and effective processing. The optimal fraction of features in the Ovarian Cancer dataset signifies that the dataset retains a minimal number of selected attributes. Consequently, the learning process gains speed due to the reduced feature set. Remarkably, accuracy substantially increased, rising from 0.868 before feature selection to 0.886 after feature selection. The classification processing time was also significantly shortened, completing the task in just 0.3 seconds, a remarkable improvement from the previous 56.8 seconds.
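As a rough illustration of how BFO can drive feature selection (a toy sketch, not the authors' WEKA-integrated Java library), each bacterium can be modeled as a bit mask over features, with chemotaxis steps that flip bits and keep only improving moves. The fitness function below is an invented stand-in for a classifier's accuracy:

```python
import random

# Toy sketch of Bacterial Foraging Optimization for feature selection:
# each bacterium is a bit mask over the features; a "tumble" flips one
# bit, and the move is kept (a "swim") only if fitness improves.

def toy_fitness(mask, relevant=(0, 3, 5)):
    # Stand-in fitness: reward selecting hypothetical relevant features,
    # penalize subset size to mimic the accuracy-vs-dimensionality trade-off.
    hits = sum(1 for i in relevant if mask[i])
    return hits - 0.1 * sum(mask)

def bfo_select(n_features=8, bacteria=4, chemotaxis_steps=50, seed=42):
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_features)]
                  for _ in range(bacteria)]
    best, best_fit = None, float("-inf")
    for mask in population:
        fit = toy_fitness(mask)
        for _ in range(chemotaxis_steps):
            candidate = mask[:]
            candidate[rng.randrange(n_features)] ^= 1  # tumble: flip one bit
            cand_fit = toy_fitness(candidate)
            if cand_fit > fit:                          # swim only if better
                mask, fit = candidate, cand_fit
        if fit > best_fit:
            best, best_fit = mask, fit
    return best

print(bfo_select())
```

A full BFO implementation also includes reproduction and elimination-dispersal phases; this sketch keeps only the chemotaxis loop to show the core idea.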
Citations: 0
Handwritten Character Recognition using Deep Learning Algorithm with Machine Learning Classifier
Pub Date : 2024-03-31 DOI: 10.62527/joiv.8.1.1707
Muhamad Arief Liman, Antonio Josef, Gede Putra Kusuma
Handwritten character recognition is a problem that has been worked on for many mainstream languages, and handwritten letter recognition has been proven to achieve promising results. Several studies using deep learning models have been conducted to achieve better accuracies. In this paper, the authors conducted experiments on the EMNIST Letters dataset with two models: Wavemix-Lite and CoAtNet. The Wavemix-Lite model uses a Level-1 Two-Dimensional Discrete Wavelet Transform to reduce the parameters and speed up the runtime. CoAtNet is a combined model of a CNN and a Visual Transformer in which the image is broken down into fixed-size patches. The feature extraction part of each model is used to embed the input image into a feature vector. From those two models, the authors hooked the feature values at the Global Average Pool layer using the EMNIST Letters data. The features hooked from the two trained models were then used to train machine learning classifiers: SVM, Random Forest, and XGBoost. The experiments show that the best machine learning model is the Random Forest, with 96.03% accuracy using the Wavemix-Lite features and 97.90% accuracy using the CoAtNet features. These results showcase the benefit of using a machine learning model to classify image features extracted by a deep learning model.
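The "hooking" of Global Average Pool features can be illustrated framework-agnostically. Below is a pure-Python stand-in (not the authors' code): a model is a list of named layer functions, and the hook records one layer's output so it can feed a downstream classifier, mimicking a forward hook on a GAP layer:

```python
# Framework-agnostic sketch of hooking an intermediate layer's output.
# The "model" is a list of (name, function) layers; the hook captures
# the output of the named layer during a forward pass.

def global_average_pool(feature_maps):
    # feature_maps: list of 2-D maps -> one averaged value per map
    return [sum(sum(row) for row in fm) / (len(fm) * len(fm[0]))
            for fm in feature_maps]

def forward(layers, x, hook_on, captured):
    for name, layer in layers:
        x = layer(x)
        if name == hook_on:
            captured.append(x)  # hooked feature vector for the classifier
    return x

layers = [
    ("gap", global_average_pool),
    ("classify", lambda feats: int(sum(feats) > 1.0)),  # toy head
]

captured = []
maps = [[[1.0, 2.0], [3.0, 4.0]], [[0.0, 0.0], [0.0, 2.0]]]
label = forward(layers, maps, hook_on="gap", captured=captured)
print(captured[0], label)
```

In a deep learning framework the same idea is typically realized by registering a forward hook on the pooling layer; the captured vectors then become the training set for SVM, Random Forest, or XGBoost.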
Citations: 0
Coordination of The Apprenticeship Industrial Program with The Siakama Application
Pub Date : 2024-03-31 DOI: 10.62527/joiv.8.1.2245
Henny Yustisia, Laras Oktavia Andreas, Risma Apdeni, Bambang Heriyadi, Jusmita Weriza
This research aims to examine the implementation of the SIAKAMA application in the Apprenticeship Industrial Program. The SIAKAMA application was created to overcome hurdles during the program's monitoring and evaluation stages. At the monitoring stage, supervising lecturers and field supervisors can use the SIAKAMA application to monitor all the activities of Apprenticeship Industrial Program students in the field, resulting in a good and smooth communication and coordination system. At the evaluation stage, the supervising lecturer and field supervisors can conduct assessments in the SIAKAMA application based on student activities in the field, including daily evaluations and a final assessment after the Apprenticeship Industrial Program has finished. This study employs a quantitative descriptive technique, the Research & Development method, and the 4D development model. A sample of Apprenticeship Industrial Program students from five departments of the Faculty of Engineering, Padang State University, was used. After data analysis using SPSS 25, the SIAKAMA application was found to be valid with a value of 0.876, practical with a value of 78.67, and effective with a value of 81.22%. This suggests that implementing the SIAKAMA application to enhance the work competency of Apprenticeship Industrial Program students is viable. The Apprenticeship Industrial Program model represents a modification of the Three Set of Actor development model, yet it has not yet been integrated with Industrial Revolution 4.0. Engaging in this program enables students to acquire the 4C skills: Creativity and Innovation, Critical Thinking and Problem Solving, Communication, and Collaboration.
Citations: 0
Classifying Gender Based on Face Images Using Vision Transformer
Pub Date : 2024-03-31 DOI: 10.62527/joiv.8.1.1923
Ganjar Gingin Tahyudin, M. D. Sulistiyo, Muhammad Arzaki, Ema Rachmawati
Due to various factors that cause visual alterations in the collected facial images, gender classification based on image processing continues to be a performance challenge for classifier models. The Vision Transformer model is used in this study to suggest a technique for identifying a person’s gender from their face images. This study investigates how well a facial image-based model can distinguish between male and female genders. It also investigates the rarely discussed performance on the variation and complexity of data caused by differences in racial and age groups. We trained on the AFAD dataset and then carried out same-dataset and cross-dataset evaluations, the latter of which considers the UTKFace dataset.  From the experiments and analysis in the same-dataset evaluation, the highest validation accuracy of  happens for the image of size  pixels with eight patches. In comparison, the highest testing accuracy of  occurs for the image of size  pixels with  patches. Moreover, the experiments and analysis in the cross-dataset evaluation show that the model works optimally for the image size  pixels with  patches, with the value of the model’s accuracy, precision, recall, and F1-score being , , , and , respectively. Furthermore, the misclassification analysis shows that the model works optimally in classifying the gender of people between 21-70 years old. The findings of this study can serve as a baseline for conducting further analysis on the effectiveness of gender classifier models considering various physical factors.
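The patch mechanism whose size the experiments vary can be sketched as follows. This is a minimal, framework-free illustration of the Vision Transformer's first step, splitting an image grid into fixed-size, non-overlapping patches (the 4x4 image size here is hypothetical):

```python
# Minimal sketch of the Vision Transformer's patching step: split an
# image (a 2-D grid of pixel values) into fixed-size, non-overlapping
# patches, each of which is later flattened into a token.

def split_into_patches(image, patch):
    rows, cols = len(image), len(image[0])
    assert rows % patch == 0 and cols % patch == 0
    return [
        [row[c:c + patch] for row in image[r:r + patch]]
        for r in range(0, rows, patch)
        for c in range(0, cols, patch)
    ]

# Hypothetical 4x4 "image" cut into 2x2 patches -> 4 patch tokens
image = [[r * 4 + c for c in range(4)] for r in range(4)]
patches = split_into_patches(image, 2)
print(len(patches))  # 4 patches
print(patches[0])    # top-left patch
```

In the actual model, each patch is flattened, linearly projected to an embedding, and combined with a positional encoding before entering the Transformer encoder.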
Citations: 0
Analysis of Job Recommendations in Vocational Education Using the Intelligent Job Matching Model
Pub Date : 2024-03-31 DOI: 10.62527/joiv.8.1.2201
Geovanne Farell, Cho Nwe Zin Latt, N. Jalinus, Asmar Yulastri, Rido Wahyudi
Vocational high schools are one of the educational stages impacted by Indonesia's low quality of education, and they play a crucial role in improving human resources. Their graduates can continue their education at universities or enter the workforce directly. Many students are found not to have considered their career path after graduation, and graduates often end up in employment that is mismatched with their expertise and skills. This research uses CRISP-DM (the Cross-Industry Standard Process for Data Mining) to build machine learning models. The approach used is content-based filtering: the model recommends items similar to those the user previously liked or selected, where item similarity is calculated from the features of the items being compared. After students receive job recommendations through intelligent job matching, they can use these recommendations as references when applying for jobs that align with their results. This process helps students direct their steps toward finding jobs that match their profiles, ultimately increasing their chances of success in the job market. These recommendations are crucial in guiding students toward career paths that align with their abilities and interests. The Intelligent Job Matching Model developed in this research provides recommendations for the job-matching process. It benefits graduates by providing job recommendations aligned with their profiles and offers advantages to the job market. By implementing the Intelligent Job Matching Model in the recruitment process, applicants can be matched effectively to jobs whose qualifications they meet.
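A minimal sketch of the content-based filtering step described above: jobs and a student profile are represented as vectors over the same features, and jobs are ranked by cosine similarity to the profile. The skill features and vector values here are hypothetical, not the paper's data:

```python
import math

# Content-based filtering sketch: rank jobs by cosine similarity
# between each job's feature vector and the student's profile vector.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Feature order (hypothetical): [networking, programming, design]
jobs = {
    "network technician": [1.0, 0.2, 0.0],
    "web developer":      [0.1, 1.0, 0.6],
    "graphic designer":   [0.0, 0.3, 1.0],
}
student_profile = [0.2, 0.9, 0.7]

ranked = sorted(jobs, key=lambda j: cosine(jobs[j], student_profile),
                reverse=True)
print(ranked[0])  # best-matching job for this profile
```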
Analysis of Job Recommendations in Vocational Education Using the Intelligent Job Matching Model
Pub Date : 2024-03-31 DOI: 10.62527/joiv.8.1.2201
Geovanne Farell, Cho Nwe Zin Latt, N. Jalinus, Asmar Yulastri, Rido Wahyudi
Vocational high schools are one of the educational stages affected by Indonesia's low quality of education, and they play a crucial role in improving human resources. Graduates of vocational high schools can continue their education at universities or enter the workforce directly. Many students have not yet considered their career path after graduation, and graduates often end up in jobs mismatched with their expertise and skills. This research uses CRISP-DM (the Cross-Industry Standard Process for Data Mining) to build machine learning models. The approach used is content-based filtering: the model recommends items similar to items the user previously liked or selected, with item similarity calculated from the features of the items being compared. After students receive job recommendations through intelligent job matching, they can use these recommendations as references when applying for jobs that align with their results. This process helps students direct their steps toward jobs that match their profiles, ultimately increasing their chances of success in the job market. These recommendations are crucial in guiding students toward career paths that align with their abilities and interests. The Intelligent Job Matching Model developed in this research provides recommendations for the job-matching process, benefiting graduates with job recommendations aligned with their profiles and offering advantages to the job market. By implementing the Intelligent Job Matching Model in the recruitment process, applicants can be matched effectively with jobs that fit their qualifications.
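The content-based filtering step described above — recommending jobs whose feature vectors are most similar to a student's profile — can be sketched with cosine similarity over binary skill features. The feature names, sample profiles, and `recommend` helper below are hypothetical illustrations, not the paper's actual model:

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(profile, jobs, top_n=2):
    # Rank jobs by similarity of their feature vector to the student profile.
    ranked = sorted(jobs.items(), key=lambda kv: cosine(profile, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_n]]

# Hypothetical binary features: [networking, programming, design, accounting]
student = [1, 1, 0, 0]
jobs = {
    "network technician": [1, 0, 0, 0],
    "junior developer":   [1, 1, 0, 0],
    "graphic designer":   [0, 0, 1, 0],
}
print(recommend(student, jobs))  # most similar jobs first
```

In a real system the feature vectors would come from the dataset's job and student attributes; the ranking step is the same regardless of how the features are encoded.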
Citations: 0
Minimum, Maximum, and Average Implementation of Patterned Datasets in Mapping Cryptocurrency Fluctuation Patterns
Pub Date : 2024-03-31 DOI: 10.62527/joiv.8.1.1543
Rizky Parlika, M. Mustafid, Basuki Rahmat
Cryptocurrency price fluctuations increasingly attract the attention of researchers around the world, and many methods have been proposed to predict whether the next price will go up or down. This research shows how to create a patterned dataset from an API connection shared by Indonesia's leading digital currency market, Indodax. From the movement data of all cryptocurrencies, four variables are extracted: the lowest price over 24 hours, the latest price, the highest price over 24 hours, and the time of the price movement; these are then assembled into a patterned dataset. The dataset is mined and stored continuously in a MySQL Server DBMS on a hosting service. It is then split by month, and per-day values are calculated. The minimum, maximum, and average functions are applied to form a graph that displays paired lines of the dataset's movement under Crash and Moon conditions. From the observations, the patterned graphical-pair dataset using the average function offers the best potential for predicting future cryptocurrency price fluctuations in the Bitcoin case study. The novelty of this research is the development of patterned datasets for predicting cryptocurrency fluctuations based on the influence of Bitcoin price movements on all currencies in the cryptocurrency trading market. This research also confirmed hypotheses (a) and (b), concerning the start and end of fluctuations.
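The per-day aggregation described above — collapsing tick-level prices into daily minimum, maximum, and average values — can be sketched in plain Python. The timestamp format and sample prices are illustrative assumptions, not the Indodax API's actual schema:

```python
from collections import defaultdict

def daily_stats(ticks):
    """Group (timestamp, price) ticks by calendar day and compute
    the min, max, and average price for each day."""
    by_day = defaultdict(list)
    for ts, price in ticks:
        day = ts.split(" ")[0]  # "YYYY-MM-DD HH:MM" -> "YYYY-MM-DD"
        by_day[day].append(price)
    return {
        day: {
            "min": min(prices),
            "max": max(prices),
            "avg": sum(prices) / len(prices),
        }
        for day, prices in by_day.items()
    }

ticks = [
    ("2024-01-01 09:00", 42000.0),
    ("2024-01-01 18:00", 43500.0),
    ("2024-01-02 09:00", 41000.0),
]
print(daily_stats(ticks))
```

In the paper's pipeline the equivalent grouping would be done against rows stored in MySQL (e.g. with `MIN()`, `MAX()`, and `AVG()` over a date column); the in-memory version above shows the same computation.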
Citations: 0
Performance Analysis of Feature Mel Frequency Cepstral Coefficient and Short Time Fourier Transform Input for Lie Detection using Convolutional Neural Network
Pub Date : 2024-03-31 DOI: 10.62527/joiv.8.1.2062
Dewi Kusumawati, A. A. Ilham, A. Achmad, Ingrid Nurtanio
This study aims to determine which of two models is more effective in detecting lies: one using Mel Frequency Cepstral Coefficient (MFCC) features and one using Short Time Fourier Transform (STFT) features, both classified with a Convolutional Neural Network (CNN). The MFCC and STFT processes are based on digital voice data from video recordings labeled as lie or truth regarding certain situations. The data is then pre-processed and used to train the CNN. Model performance evaluation with hyperparameter tuning and random search shows that MFCC-based voice processing delivers better performance, with higher accuracy, than the STFT process. The best MFCC parameters are filter convolutional1=64, kernel convolutional1=5, filter convolutional2=112, kernel convolutional2=3, filter convolutional3=32, kernel convolutional3=5, dense1=96, optimizer=RMSProp, and learning rate=0.001, achieving an accuracy of 97.13% with an AUC of 0.97. With STFT, the best parameters are filter convolutional1=96, kernel convolutional1=5, filter convolutional2=48, kernel convolutional2=5, filter convolutional3=96, kernel convolutional3=5, dense1=128, optimizer=Adadelta, and learning rate=0.001, achieving an accuracy of 95.39% with an AUC of 0.95. Prosodic features are used as a baseline against MFCC and STFT; they achieve a low accuracy of 68%. The analysis shows that using MFCC for feature extraction with the CNN model produces the best performance for audio-based lie detection. Further research could combine CNN architectures such as ResNet, AlexNet, and others to obtain new models and improve lie detection accuracy.
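Both feature pipelines compared in the abstract begin with a short-time spectral analysis of the waveform (MFCCs are themselves derived from an STFT-style analysis via mel filtering and a cosine transform). A minimal pure-Python STFT magnitude computation is sketched below; the frame and hop sizes are toy values chosen for illustration, not the paper's settings:

```python
import cmath

def stft_mag(signal, frame=8, hop=4):
    """Magnitude spectrogram via a naive DFT on overlapping frames.
    Returns a list of frames, each holding |X[k]| for k <= frame // 2."""
    frames = []
    for start in range(0, len(signal) - frame + 1, hop):
        window = signal[start:start + frame]
        mags = []
        for k in range(frame // 2 + 1):  # keep non-negative frequencies only
            xk = sum(window[n] * cmath.exp(-2j * cmath.pi * k * n / frame)
                     for n in range(frame))
            mags.append(abs(xk))
        frames.append(mags)
    return frames

# A constant signal concentrates all its energy in the DC bin (k = 0),
# so the first bin is large and the others are near zero.
spec = stft_mag([1.0] * 16)
print(spec[0][0], spec[0][1])
```

A production pipeline would instead use a windowed FFT (e.g. `scipy.signal.stft`) and, for MFCCs, a library such as librosa; the sketch only shows the frame-wise transform that both feature types share.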
Citations: 0
Classification of Lombok Songket and Sasambo Batik Motifs Using the Convolution Neural Network (CNN) Algorithm
Pub Date : 2024-03-31 DOI: 10.62527/joiv.8.1.1386
Suthami Ariessaputra, Viviana Herlita Vidiasari, Sudi Mariyanto Al Sasongko, Budi Darmawan, Sabar Nababan
Sasambo batik is a traditional batik from West Nusa Tenggara province. The name Sasambo is an abbreviation of three tribes: the Sasak (sa) of the Lombok Islands, and the Samawa (sam) and Mbojo (bo) tribes of Sumbawa Island. Classification of batik motifs can use image processing technology, one approach being the Convolutional Neural Network (CNN) algorithm. Before classification, each batik image is first resized and then passed through the convolution, pooling, and fully connected layers. The sample images of Lombok songket motifs and Sasambo batik consist of 20 songket fabrics with the same motif and color, 14 songket fabrics with the same motif but different colors, and 10 songket fabrics with other motifs and colors. There are also 5 Sasambo batik fabrics with the same motif and color and 5 with the same motif but different colors. For augmentation, the training data rotates each image by 15°, producing as many as 20 photos. Testing with motifs of the same color shows that the system's success rate is 83.85%. The highest average recognition for Sasambo batik cloth, 93.66%, is obtained when testing motifs with the same color as data in the database. The CNN classification results indicate that this model can serve as a reference for developing songket categorization on a website platform or the Android system.
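The convolution and pooling layers mentioned above reduce an image to progressively smaller feature maps before the fully connected layers classify it. A minimal pure-Python sketch of one "valid" 3×3 convolution followed by 2×2 max pooling is shown below; the toy input and edge kernel are illustrative, not the paper's trained filters:

```python
def conv2d(image, kernel):
    """'Valid' 2-D convolution (cross-correlation, as in most CNN
    frameworks) of a 2-D list image with a small 2-D kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def max_pool(fm, size=2):
    """Non-overlapping size x size max pooling of a 2-D feature map."""
    return [[max(fm[i + di][j + dj] for di in range(size) for dj in range(size))
             for j in range(0, len(fm[0]) - size + 1, size)]
            for i in range(0, len(fm) - size + 1, size)]

# Toy 5x5 "image" with a vertical edge, and a vertical-edge kernel.
img = [[1, 1, 0, 0, 0]] * 5
edge = [[1, 0, -1],
        [1, 0, -1],
        [1, 0, -1]]
fm = conv2d(img, edge)  # 3x3 feature map highlighting the edge
print(max_pool(fm))     # -> [[3]]
```

A real CNN stacks many such filter/pool stages with learned kernels and nonlinearities; the sketch only shows the arithmetic of a single stage.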
Citations: 0