Alif Wicaksana Ramadhan, Fira Aulia, Ni Made Lintang Asvini Dewi, Idris Winarno, S. Sukaridhoto
This study investigates the potential of using Message Passing Interface (MPI) parallelization to enhance the speed of the image stitching process. The image stitching process involves combining multiple images to create a seamless panoramic view. This research explores the potential benefits of segmenting photos into distributed tasks among several identical processor nodes to expedite the stitching process. However, it is crucial to consider that increasing the number of nodes may introduce a trade-off between the speed and quality of the stitching process. The initial experiments were conducted without MPI, resulting in a stitching time of 1506.63 seconds. Subsequently, the researchers employed MPI parallelization on two computer nodes, which reduced the stitching time to 624 seconds. Further improvement was observed when four computer nodes were used, resulting in a stitching time of 346.8 seconds. These findings highlight the potential benefits of MPI parallelization for image stitching tasks. The reduced stitching time achieved through parallelization demonstrates the ability to accelerate the overall stitching process. However, it is essential to carefully consider the trade-off between speed and quality when determining the optimal number of nodes to employ. By effectively distributing the workload across multiple nodes, researchers and practitioners can take advantage of the parallel processing capabilities offered by MPI to expedite image stitching tasks. Future studies could explore additional optimization techniques and evaluate the impact on speed and quality to achieve an optimal balance in real-world applications.
{"title":"Distributed Aerial Image Stitching on Multiple Processors using Message Passing Interface","authors":"Alif Wicaksana Ramadhan, Fira Aulia, Ni Made Lintang Asvini Dewi, Idris Winarno, S. Sukaridhoto","doi":"10.62527/joiv.8.1.1890","DOIUrl":"https://doi.org/10.62527/joiv.8.1.1890","url":null,"abstract":"This study investigates the potential of using Message Passing Interface (MPI) parallelization to enhance the speed of the image stitching process. The image stitching process involves combining multiple images to create a seamless panoramic view. This research explores the potential benefits of segmenting photos into distributed tasks among several identical processor nodes to expedite the stitching process. However, it is crucial to consider that increasing the number of nodes may introduce a trade-off between the speed and quality of the stitching process. The initial experiments were conducted without MPI, resulting in a stitching time of 1506.63 seconds. Subsequently, the researchers employed MPI parallelization on two computer nodes, which reduced the stitching time to 624 seconds. Further improvement was observed when four computer nodes were used, resulting in a stitching time of 346.8 seconds. These findings highlight the potential benefits of MPI parallelization for image stitching tasks. The reduced stitching time achieved through parallelization demonstrates the ability to accelerate the overall stitching process. However, it is essential to carefully consider the trade-off between speed and quality when determining the optimal number of nodes to employ. By effectively distributing the workload across multiple nodes, researchers and practitioners can take advantage of the parallel processing capabilities offered by MPI to expedite image stitching tasks. 
Future studies could explore additional optimization techniques and evaluate the impact on speed and quality to achieve an optimal balance in real-world applications.","PeriodicalId":513790,"journal":{"name":"JOIV : International Journal on Informatics Visualization","volume":"19 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140358573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
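The reported timings can be checked for scaling behavior. Below is a minimal sketch (not from the paper) that computes speedup and parallel efficiency from the abstract's numbers:

```python
def speedup(serial_time: float, parallel_time: float) -> float:
    """Classic speedup: how many times faster the parallel run is."""
    return serial_time / parallel_time

def efficiency(serial_time: float, parallel_time: float, nodes: int) -> float:
    """Parallel efficiency: speedup divided by the number of nodes."""
    return speedup(serial_time, parallel_time) / nodes

# Timings reported in the abstract (seconds).
serial, two_nodes, four_nodes = 1506.63, 624.0, 346.8

print(f"2 nodes: speedup {speedup(serial, two_nodes):.2f}x, "
      f"efficiency {efficiency(serial, two_nodes, 2):.2f}")
print(f"4 nodes: speedup {speedup(serial, four_nodes):.2f}x, "
      f"efficiency {efficiency(serial, four_nodes, 4):.2f}")
```

Note that the reported numbers imply efficiency above 1 (superlinear speedup); this can occur when each node's smaller working set fits cache or memory better, or when the serial baseline runs a different code path than the per-node stitching task.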
Toto Haryanto, H. Suhartanto, A. Murni, K. Kusmardi, Marina Yusoff, Jasni Mohmad Zain
Since the coronavirus was first discovered in Wuhan, it has spread widely and was finally declared a global pandemic by the WHO. Image processing plays an essential role in examining the lungs of affected patients. Computed Tomography (CT) and X-ray images have been widely used to examine the lungs of COVID-19 patients. This research aims to design a simple Convolutional Neural Network (CNN) architecture called SCOV-CNN for classifying the virus from CT images and to implement it in a web-based application. The data used in this work were CT images of 120 patients from hospitals in Brazil. SCOV-CNN was inspired by the LeNet architecture but has a deeper convolution and pooling layer structure. Combining 7×7 and 5×5 convolution kernels with padding schemes preserves the feature information from the images. Furthermore, it has three fully connected (FC) layers with a dropout of 0.3 on each. The model was evaluated using sensitivity, specificity, precision, F1 score, and ROC curve values. The results show that the proposed architecture is comparable to some prominent deep learning techniques in terms of accuracy (0.96), precision (0.98), and F1 score (0.95). The best model was integrated into a web-based system to help and facilitate users' activities. We use Python Flask (with PAM tools) as the web server on the server side and JavaScript for the User Interface (UI) design.
{"title":"SCOV-CNN: A Simple CNN Architecture for COVID-19 Identification Based on the CT Images","authors":"Toto - Haryanto, H. Suhartanto, A. Murni, K. Kusmardi, Marina Yusoff, Jasni Mohmad Zain","doi":"10.62527/joiv.8.1.1750","DOIUrl":"https://doi.org/10.62527/joiv.8.1.1750","url":null,"abstract":"Since the coronavirus was first discovered in Wuhan, it has widely spread and was finally declared a global pandemic by the WHO. Image processing plays an essential role in examining the lungs of affected patients. Computed Tomography (CT) and X-ray images have been popularly used to examine the lungs of COVID-19 patients. This research aims to design a simple Convolution Neural Network (CNN) architecture called SCOV-CNN for the classification of the virus based on CT images and implementation on the web-based application. The data used in this work were CT images of 120 patients from hospitals in Brazil. SCOV-CNN was inspired by the LeNet architecture, but it has a deeper convolution and pooling layer structure. Combining seven and five kernel sizes for convolution and padding schemes can preserve the feature information from the images. Furthermore, it has three fully connected (FC) layers with a dropout of 0.3 on each. In addition, the model was evaluated using the sensitivity, specificity, precision, F1 score, and ROC curve values. The results showed that the architecture we proposed was comparable to some prominent deep learning techniques in terms of accuracy (0.96), precision (0.98), and F1 score (0.95). The best model was integrated into a website-based system to help and facilitate the users' activities. 
We use Python Flask Pam tools as a web server on the server side and JavaScript for the User Interface (UI) Design","PeriodicalId":513790,"journal":{"name":"JOIV : International Journal on Informatics Visualization","volume":"25 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140359044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
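The claim that the 7×7 and 5×5 kernels with padding "preserve the feature information" corresponds to standard "same"-padding arithmetic. A small sketch of that arithmetic follows; the 128×128 input size and 2×2 pooling are illustrative assumptions, not values taken from the paper:

```python
def conv_out(size: int, kernel: int, stride: int = 1, padding: int = 0) -> int:
    """Spatial output size of a convolution: floor((size + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

def same_padding(kernel: int) -> int:
    """Padding that preserves spatial size for stride 1 and an odd kernel."""
    return (kernel - 1) // 2

# Hypothetical 128x128 CT slice (the paper's exact input size is not given here).
size = 128
for k in (7, 5):
    p = same_padding(k)
    size_after = conv_out(size, k, padding=p)   # unchanged: padding compensates kernel
    print(f"kernel {k}x{k}, padding {p}: {size} -> {size_after}")
    size = conv_out(size_after, 2, stride=2)    # 2x2 max pooling halves the map
    print(f"after 2x2/2 pooling: {size}")
```

With these assumptions, each convolution leaves the feature map size intact and only the pooling layers shrink it (128 → 64 → 32), which is what lets the kernel/padding combination retain spatial detail.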
T. Badriyah, I. Syarif, Fitriani Rohmah Hardiyanti
High-dimensional data allows researchers to conduct comprehensive analyses. However, such data often exhibits characteristics like small sample sizes, class imbalance, and high complexity, posing challenges for classification. One approach employed to tackle high-dimensional data is feature selection. This study uses the Bacterial Foraging Optimization (BFO) algorithm for feature selection. A dedicated BFO Java library is developed to extend the capabilities of WEKA for feature selection purposes. Experimental results confirm the successful integration of BFO. The outcomes of BFO's feature selection are then compared against those of other evolutionary algorithms, namely Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), and Ant Colony Optimization (ACO). The comparison of algorithms was conducted on the same datasets. The experimental results indicate that BFO effectively reduces features while maintaining consistent accuracy. In 4 out of 9 datasets, BFO outperforms the other algorithms, and it shows superior processing-time performance in 6 datasets. BFO is therefore a favorable choice for feature selection in high-dimensional datasets, providing consistent accuracy and efficient processing. The small optimal fraction of features in the Ovarian Cancer dataset means that only a minimal number of attributes are retained, so the learning process is faster due to the reduced feature set. Remarkably, accuracy increased substantially, rising from 0.868 before feature selection to 0.886 after feature selection. The classification processing time has also been significantly shortened, completing the task in just 0.3 seconds, a remarkable improvement from the previous 56.8 seconds.
{"title":"Development of a Java Library with Bacterial Foraging Optimization for Feature Selection of High-Dimensional Data","authors":"T. Badriyah, I. Syarif, Fitriani Rohmah Hardiyanti","doi":"10.62527/joiv.8.1.2149","DOIUrl":"https://doi.org/10.62527/joiv.8.1.2149","url":null,"abstract":"High-dimensional data allows researchers to conduct comprehensive analyses. However, such data often exhibits characteristics like small sample sizes, class imbalance, and high complexity, posing challenges for classification. One approach employed to tackle high-dimensional data is feature selection. This study uses the Bacterial Foraging Optimization (BFO) algorithm for feature selection. A dedicated BFO Java library is developed to extend the capabilities of WEKA for feature selection purposes. Experimental results confirm the successful integration of BFO. The outcomes of BFO's feature selection are then compared against those of other evolutionary algorithms, namely Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), and Ant Colony Optimization (ACO). Comparison of algorithms conducted using the same datasets. The experimental results indicate that BFO effectively reduces features while maintaining consistent accuracy. In 4 out of 9 datasets, BFO outperforms other algorithms, showcasing superior processing time performance in 6 datasets. BFO is a favorable choice for selecting features in high-dimensional datasets, providing consistent accuracy and effective processing. The optimal fraction of features in the Ovarian Cancer dataset signifies that the dataset retains a minimal number of selected attributes. Consequently, the learning process gains speed due to the reduced feature set. Remarkably, accuracy substantially increased, rising from 0.868 before feature selection to 0.886 after feature selection. 
The classification processing time has also been significantly shortened, completing the task in just 0.3 seconds, marking a remarkable improvement from the previous 56.8 seconds.","PeriodicalId":513790,"journal":{"name":"JOIV : International Journal on Informatics Visualization","volume":"37 36","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140358062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
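BFO's core chemotaxis step (tumble to a random neighbor, then swim while fitness keeps improving) can be sketched over binary feature masks. This is a deliberately simplified illustration: the toy fitness function, population size, and step counts are assumptions, and full BFO — as the paper's Java library would implement it — also includes reproduction and elimination-dispersal phases omitted here:

```python
import random

def tumble(mask, rng):
    """Flip one random bit: the binary analogue of a random direction change."""
    new = mask[:]
    i = rng.randrange(len(new))
    new[i] = 1 - new[i]
    return new

def bfo_select(n_features, fitness, n_bacteria=6, n_steps=30, swim_len=3, seed=42):
    """Chemotaxis-only BFO over 0/1 feature masks (toy sketch)."""
    rng = random.Random(seed)
    swarm = [[rng.randint(0, 1) for _ in range(n_features)]
             for _ in range(n_bacteria)]
    best = max(swarm, key=fitness)
    for _ in range(n_steps):                 # chemotaxis loop
        for b in range(n_bacteria):
            cand = tumble(swarm[b], rng)
            swims = 0
            while fitness(cand) > fitness(swarm[b]) and swims < swim_len:
                swarm[b] = cand              # keep moving in an improving direction
                cand = tumble(cand, rng)
                swims += 1
        best = max(swarm + [best], key=fitness)
    return best

# Toy fitness: features 0-2 are "informative"; every kept feature costs 0.1.
def fitness(mask):
    return sum(mask[:3]) - 0.1 * sum(mask)

best = bfo_select(10, fitness)
print(best, fitness(best))
```

In the real setting, `fitness` would be a classifier's cross-validated accuracy on the masked feature subset, which is where the processing-time differences between BFO and the other metaheuristics arise.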
Muhamad Arief Liman, Antonio Josef, Gede Putra Kusuma
Handwritten character recognition is a problem that has been worked on for many mainstream languages, and handwritten letter recognition has been shown to achieve promising results. Several studies using deep learning models have been conducted to achieve better accuracies. In this paper, the authors conducted two experiments on the EMNIST Letters dataset: WaveMix-Lite and CoAtNet. The WaveMix-Lite model uses a level-1 two-dimensional Discrete Wavelet Transform to reduce the parameters and speed up the runtime. CoAtNet is a combined model of a CNN and a Vision Transformer in which the image is broken down into fixed-size patches. The feature extraction part of each model is used to embed the input image into a feature vector. From those two models, the authors hooked the feature values at the Global Average Pool layer using EMNIST Letters data. The features hooked from the two trained models were then used to train machine learning classifiers, namely SVM, Random Forest, and XGBoost. The experiments show that the best machine-learning model is the Random Forest, with 96.03% accuracy using the WaveMix-Lite features and 97.90% accuracy using the CoAtNet features. These results showcase the benefit of using a machine learning model to classify image features extracted by a deep learning model.
{"title":"Handwritten Character Recognition using Deep Learning Algorithm with Machine Learning Classifier","authors":"Muhamad Arief Liman, Antonio Josef, Gede Putra Kusuma","doi":"10.62527/joiv.8.1.1707","DOIUrl":"https://doi.org/10.62527/joiv.8.1.1707","url":null,"abstract":"Handwritten character recognition is a problem that has been worked on for many mainstream languages. Handwritten letter recognition has been proven to achieve promising results. Several studies using deep learning models have been conducted to achieve better accuracies. In this paper, the authors conducted two experiments on the EMNIST Letters dataset: Wavemix-Lite and CoAtNet. The Wavemix-Lite model uses Two-Dimensional Discrete Wavelet Transform Level 1 to reduce the parameters and speed up the runtime. The CoAtNet is a combined model of CNN and Visual Transformer where the image is broken down into fixed-size patches. The feature extraction part of the model is used to embed the input image into a feature vector. From those two models, the authors hooked the value of the features of the Global Average Pool layer using EMNIST Letters data. The features hooked from the training results of the two models, such as SVM, Random Forest, and XGBoost models, were used to train the machine learning classifier. The experiments conducted by the authors show that the best machine-learning model is the Random Forest, with 96.03% accuracy using the Wavemix-Lite model and 97.90% accuracy using the CoAtNet model. 
These results showcased the benefit of using a machine learning model for classifying image features that are extracted using a deep learning model.","PeriodicalId":513790,"journal":{"name":"JOIV : International Journal on Informatics Visualization","volume":"121 7","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140359816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
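"Hooking" features from an intermediate layer — in PyTorch this is typically done with `register_forward_hook` — can be illustrated with a minimal pure-Python mock. The layer functions and numeric values below are invented for illustration and are not the paper's models:

```python
class Layer:
    """A callable layer that notifies registered hooks of its output."""
    def __init__(self, name, fn):
        self.name, self.fn, self.hooks = name, fn, []
    def __call__(self, x):
        out = self.fn(x)
        for hook in self.hooks:
            hook(self.name, out)
        return out

captured = {}
def capture(name, out):
    captured[name] = out   # stash the intermediate features for a downstream classifier

# Stand-in "network": global average pool over feature-map rows, then a toy head.
gap = Layer("global_avg_pool", lambda x: [sum(row) / len(row) for row in x])
head = Layer("head", lambda v: 0 if sum(v) < 1.0 else 1)
gap.hooks.append(capture)

feature_maps = [[0.25, 0.75], [0.5, 1.0]]   # tiny invented activations
pred = head(gap(feature_maps))
print(captured["global_avg_pool"], pred)
```

The captured vectors — not the network's own predictions — are what get fed to SVM, Random Forest, or XGBoost in the approach the abstract describes.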
This research aims to examine the implementation of the SIAKAMA application in the Apprenticeship Industrial Program. The SIAKAMA application was created to overcome hurdles during the program's monitoring and evaluation stages. At the monitoring stage, supervising lecturers and field supervisors can use the SIAKAMA application to monitor all Apprenticeship Industrial Program student activities in the field, resulting in a good and smooth communication and coordination system. At the evaluation stage, the supervising lecturer and field supervisors can use the SIAKAMA application to conduct assessments based on student activities in the field, including daily evaluations and a final assessment after the Apprenticeship Industrial Program has finished. This study employs a quantitative descriptive technique, the Research & Development method, and the 4D development model. A sample of Apprenticeship Industrial Program students from five departments of the Faculty of Engineering, Padang State University, was used in this study. After data analysis using SPSS 25, the SIAKAMA application was found to be valid with a value of 0.876, practical with a value of 78.67, and effective with a value of 81.22%. This suggests that implementing the SIAKAMA application to enhance the work competency of Apprenticeship Industrial Program students is viable. The Apprenticeship Industrial Program model is a modification of the Three Set of Actor development model, yet it has not yet been integrated with Industrial Revolution 4.0 technologies. Engaging in this program enables students to acquire 4C skills: Creativity and Innovation, Critical Thinking and Problem Solving, Communication, and Collaboration.
{"title":"Coordination of The Apprenticeship Industrial Program with The Siakama Application","authors":"Henny Yustisia, Laras Oktavia Andreas, Risma Apdeni, Bambang Heriyadi, Jusmita Weriza","doi":"10.62527/joiv.8.1.2245","DOIUrl":"https://doi.org/10.62527/joiv.8.1.2245","url":null,"abstract":"This research aims to examine the implementation of the SIAKAMA application in the Apprenticeship Industrial Program. This program was created as a SIAKAMA application to overcome hurdles during the monitoring and evaluation stages. At the monitoring stage, supervising lecturers and field supervisors can use the SIAKAMA application to monitor all Apprenticeship Industrial program student activities in the field, resulting in a good and smooth communication and coordination system. At the evaluation stage, the supervising lecturer and field supervisors in the SIAKAMA application can conduct assessments based on student activities in the field, including daily evaluations and final assessments after the Apprenticeship Industrial Program has been finished. This study employs a quantitative descriptive technique, the Research & Development method, and the 4D development model. A sample of Apprenticeship Industrial Program students from five departments of the Faculty of Engineering, Padang State University, was used in this study. The SIAKAMA application was found to be valid with a value of 0.876, practical with a value of 78.67, and effective with a value of 81.22% after data analysis using SPSS 25. This suggests that implementing the SIAKAMA application to enhance the work competency of Apprenticeship Industrial Program students is viable. The Apprenticeship Industrial Program model represents a modification of the Three Set of Actor development model, yet it hasn't been incorporated with the Industrial Revolution 4.0. 
Engaging in this Program enables students to acquire 4C skills, including Creativity and Innovation, Critical Thinking and Problem Solving, Communication, and Collaboration.","PeriodicalId":513790,"journal":{"name":"JOIV : International Journal on Informatics Visualization","volume":"113 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140360364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ganjar Gingin Tahyudin, M. D. Sulistiyo, Muhammad Arzaki, Ema Rachmawati
Due to various factors that cause visual alterations in the collected facial images, gender classification based on image processing continues to be a performance challenge for classifier models. The Vision Transformer model is used in this study to suggest a technique for identifying a person’s gender from their face images. This study investigates how well a facial image-based model can distinguish between male and female genders. It also investigates the rarely discussed performance on the variation and complexity of data caused by differences in racial and age groups. We trained on the AFAD dataset and then carried out same-dataset and cross-dataset evaluations, the latter of which considers the UTKFace dataset. From the experiments and analysis in the same-dataset evaluation, the highest validation accuracy of happens for the image of size pixels with eight patches. In comparison, the highest testing accuracy of occurs for the image of size pixels with patches. Moreover, the experiments and analysis in the cross-dataset evaluation show that the model works optimally for the image size pixels with patches, with the value of the model’s accuracy, precision, recall, and F1-score being , , , and , respectively. Furthermore, the misclassification analysis shows that the model works optimally in classifying the gender of people between 21-70 years old. The findings of this study can serve as a baseline for conducting further analysis on the effectiveness of gender classifier models considering various physical factors.
{"title":"Classifying Gender Based on Face Images Using Vision Transformer","authors":"Ganjar Gingin Tahyudin, M. D. Sulistiyo, Muhammad Arzaki, Ema Rachmawati","doi":"10.62527/joiv.8.1.1923","DOIUrl":"https://doi.org/10.62527/joiv.8.1.1923","url":null,"abstract":"Due to various factors that cause visual alterations in the collected facial images, gender classification based on image processing continues to be a performance challenge for classifier models. The Vision Transformer model is used in this study to suggest a technique for identifying a person’s gender from their face images. This study investigates how well a facial image-based model can distinguish between male and female genders. It also investigates the rarely discussed performance on the variation and complexity of data caused by differences in racial and age groups. We trained on the AFAD dataset and then carried out same-dataset and cross-dataset evaluations, the latter of which considers the UTKFace dataset. From the experiments and analysis in the same-dataset evaluation, the highest validation accuracy of happens for the image of size pixels with eight patches. In comparison, the highest testing accuracy of occurs for the image of size pixels with patches. Moreover, the experiments and analysis in the cross-dataset evaluation show that the model works optimally for the image size pixels with patches, with the value of the model’s accuracy, precision, recall, and F1-score being , , , and , respectively. Furthermore, the misclassification analysis shows that the model works optimally in classifying the gender of people between 21-70 years old. 
The findings of this study can serve as a baseline for conducting further analysis on the effectiveness of gender classifier models considering various physical factors.","PeriodicalId":513790,"journal":{"name":"JOIV : International Journal on Informatics Visualization","volume":"1 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140359408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
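The fixed-size-patch step of a Vision Transformer can be sketched as follows. The 4×4 toy image and 2×2 patch size are purely illustrative, since the abstract's actual image and patch sizes are elided in this copy:

```python
def num_patches(height: int, width: int, patch: int) -> int:
    """How many non-overlapping patches tile the image."""
    assert height % patch == 0 and width % patch == 0, "image must tile evenly"
    return (height // patch) * (width // patch)

def patchify(img, patch):
    """Split a 2D grid (list of rows) into non-overlapping patch blocks."""
    h, w = len(img), len(img[0])
    return [
        [row[c:c + patch] for row in img[r:r + patch]]
        for r in range(0, h, patch)
        for c in range(0, w, patch)
    ]

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 toy "image"
patches = patchify(img, 2)
print(num_patches(4, 4, 2), patches[0])
```

Each flattened patch then receives a linear embedding plus a position encoding before entering the transformer encoder; varying the patch count, as the experiments here do, trades sequence length against per-patch detail.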
Geovanne Farell, Cho Nwe Zin Latt, N. Jalinus, Asmar Yulastri, Rido Wahyudi
Vocational high schools are one of the educational stages affected by Indonesia's low quality of education, and they play a crucial role in improving human resources. Graduates of vocational high schools can continue their education at universities or enter the workforce directly. Many students are found to have not yet considered their career path after graduation, and at the same time, many graduates still end up in employment that does not match their expertise and skills. This research uses CRISP-DM (Cross Industry Standard Process for Data Mining) to build machine learning models. The approach used is content-based filtering, which recommends items similar to those the user previously liked or selected; item similarity is calculated from the features of the items being compared. After students receive job recommendations through intelligent job matching, they can use these recommendations as references when applying for jobs that align with their results. This process helps students direct their steps toward finding jobs that match their profiles, ultimately increasing their chances of success in the job market. These recommendations are crucial in guiding students toward career paths that align with their abilities and interests. The Intelligent Job Matching Model developed in this research provides recommendations for the job-matching process. It benefits graduates by providing job recommendations aligned with their profiles and offers advantages to the job market as well.
{"title":"Analysis of Job Recommendations in Vocational Education Using the Intelligent Job Matching Model","authors":"Geovanne Farell, Cho Nwe Zin Latt, N. Jalinus, Asmar Yulastri, Rido Wahyudi","doi":"10.62527/joiv.8.1.2201","DOIUrl":"https://doi.org/10.62527/joiv.8.1.2201","url":null,"abstract":"Vocational high schools are one of the educational stages impacted by Indonesia's low quality of education. Vocational High Schools play a crucial role in improving human resources. Graduates of Vocational High Schools can continue their education at universities or enter the workforce directly. Many students are found to have not yet considered their career path after graduation. At the same time, graduates are still expected to find mismatched employment with their expertise and skills. This research uses CRISP-DM, or Cross Industry Standard Process for Data Mining, to build machine learning models. The approach used is content-based filtering. This model recommends items similar to previously liked or selected items by the user. Item similarity can be calculated based on the features of the items being compared. After students receive job recommendations through intelligent job matching, they can use these recommendations as references when applying for jobs that align with their results. This process helps students direct their steps toward finding jobs that match their profiles, ultimately increasing their chances of success in the job market. These recommendations are crucial in guiding students toward career paths that align with their abilities and interests. The Intelligent Job Matching Model developed in this research provides recommendations for the job-matching process. This model benefits graduates by providing job recommendations aligned with their profiles and offers advantages to the job market. 
By implementing the Model of Intelligent Job Matching in the recruitment process, applicants with job qualifications can be matched effectively.","PeriodicalId":513790,"journal":{"name":"JOIV : International Journal on Informatics Visualization","volume":"24 17","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140358536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
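Content-based filtering of the kind described typically ranks items by feature similarity, commonly cosine similarity. Below is a minimal sketch with invented binary skill profiles; the feature columns and job names are hypothetical, not taken from the paper:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two feature vectors (0.0 if either is all-zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu, nv = sqrt(sum(a * a for a in u)), sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical profiles; columns: welding, CAD, programming, networking.
student = [0, 1, 1, 1]
jobs = {
    "machinist": [1, 1, 0, 0],
    "web_dev":   [0, 0, 1, 1],
    "drafter":   [0, 1, 1, 0],
}
ranked = sorted(jobs, key=lambda j: cosine(student, jobs[j]), reverse=True)
print(ranked)
```

The highest-similarity jobs become the recommendations; in practice the item features would come from job-requirement data and the student vector from recorded competencies, per the CRISP-DM data-preparation phase.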
Cryptocurrency price fluctuations are increasingly interesting and are of concern to researchers around the world. Many ways have been proposed to predict the next price, whether it will go up or down. This research shows how to create a patterned dataset from an API connection shared by Indonesia's leading digital currency market, Indodax. From the data on the movement of all cryptocurrencies, the lowest price variable is taken for 24 hours, the latest price, the highest price for 24 hours, and the time of price movement, which is then programmed into a pattern dataset. This patterned dataset is then mined and stored continuously on the MySQL Server DBMS on the hosting service. The patterned dataset is then separated per month, and the data per day is calculated. The minimum, maximum, and average functions are then applied to form a graph that displays paired lines of the movement of the patterned dataset in Crash and Moon conditions. From the observations, the Patterned Graphical Pair dataset using the Average function provides the best potential for predicting future cryptocurrency price fluctuations with the Bitcoin case study. The novelty of this research is the development of patterned datasets for predicting cryptocurrency fluctuations based on the influence of bitcoin price movements on all currencies in the cryptocurrency trading market. This research also proved the truth of hypotheses a and b related to the start and end of fluctuations.
{"title":"Minimum, Maximum, and Average Implementation of Patterned Datasets in Mapping Cryptocurrency Fluctuation Patterns","authors":"Rizky Parlika, M. Mustafid, Basuki Rahmat","doi":"10.62527/joiv.8.1.1543","DOIUrl":"https://doi.org/10.62527/joiv.8.1.1543","url":null,"abstract":"Cryptocurrency price fluctuations are increasingly interesting and are of concern to researchers around the world. Many ways have been proposed to predict the next price, whether it will go up or down. This research shows how to create a patterned dataset from an API connection shared by Indonesia's leading digital currency market, Indodax. From the data on the movement of all cryptocurrencies, the lowest price variable is taken for 24 hours, the latest price, the highest price for 24 hours, and the time of price movement, which is then programmed into a pattern dataset. This patterned dataset is then mined and stored continuously on the MySQL Server DBMS on the hosting service. The patterned dataset is then separated per month, and the data per day is calculated. The minimum, maximum, and average functions are then applied to form a graph that displays paired lines of the movement of the patterned dataset in Crash and Moon conditions. From the observations, the Patterned Graphical Pair dataset using the Average function provides the best potential for predicting future cryptocurrency price fluctuations with the Bitcoin case study. The novelty of this research is the development of patterned datasets for predicting cryptocurrency fluctuations based on the influence of bitcoin price movements on all currencies in the cryptocurrency trading market. 
This research also proved the truth of hypotheses a and b related to the start and end of fluctuations.","PeriodicalId":513790,"journal":{"name":"JOIV : International Journal on Informatics Visualization","volume":"11 11","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140358994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
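The per-day minimum, maximum, and average aggregation described above can be sketched in plain Python. This is a minimal illustration under assumptions: the tick layout and field names are invented for the example and are not Indodax's actual API schema.

```python
from collections import defaultdict

# Hypothetical tick records: (timestamp, last_price) pairs, standing in for
# rows of the patterned dataset; the values are illustrative only.
ticks = [
    ("2024-01-01 00:10", 42000.0),
    ("2024-01-01 08:30", 43100.0),
    ("2024-01-01 21:45", 41550.0),
    ("2024-01-02 03:05", 43900.0),
    ("2024-01-02 15:20", 44250.0),
]

def daily_min_max_avg(rows):
    """Group ticks by calendar day and apply the min, max, and avg functions."""
    by_day = defaultdict(list)
    for ts, price in rows:
        by_day[ts[:10]].append(price)  # group by the YYYY-MM-DD prefix
    return {
        day: {
            "min": min(prices),
            "max": max(prices),
            "avg": sum(prices) / len(prices),
        }
        for day, prices in by_day.items()
    }
```

In the study this aggregation runs over a MySQL-backed dataset per month; the pure-Python version above only shows the shape of the computation that produces the paired min/max/avg lines.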
Dewi Kusumawati, A. A. Ilham, A. Achmad, Ingrid Nurtanio
This study aims to determine which model is more effective in detecting lies: a model using the Mel Frequency Cepstral Coefficient (MFCC) process or one using the Short Time Fourier Transform (STFT) process, both with a Convolutional Neural Network (CNN). The MFCC and STFT processes are applied to digital voice data from video recordings labeled as lie or truth regarding certain situations. The data is then pre-processed and used to train the CNN. The results of model performance evaluation with hyperparameter tuning and random search show that using MFCC for voice data processing yields better performance, with higher accuracy, than using the STFT process. The best parameters for MFCC are filter convolutional1=64, kernel convolutional1=5, filter convolutional2=112, kernel convolutional2=3, filter convolutional3=32, kernel convolutional3=5, dense1=96, optimizer=RMSProp, learning rate=0.001, which achieve an accuracy of 97.13% with an AUC value of 0.97. Using STFT, the best parameters are filter convolutional1=96, kernel convolutional1=5, filter convolutional2=48, kernel convolutional2=5, filter convolutional3=96, kernel convolutional3=5, dense1=128, optimizer=Adadelta, learning rate=0.001, which achieve an accuracy of 95.39% with an AUC value of 0.95. Prosodic features are used as a baseline against MFCC and STFT; they achieve a lower accuracy of 68%. The analysis shows that using MFCC as the sound-extraction process with the CNN model produces the best performance for audio-based lie detection. Further research could combine CNN architectures such as ResNet, AlexNet, and others to obtain new models and improve lie-detection accuracy.
{"title":"Performance Analysis of Feature Mel Frequency Cepstral Coefficient and Short Time Fourier Transform Input for Lie Detection using Convolutional Neural Network","authors":"Dewi Kusumawati, A. A. Ilham, A. Achmad, Ingrid Nurtanio","doi":"10.62527/joiv.8.1.2062","DOIUrl":"https://doi.org/10.62527/joiv.8.1.2062","url":null,"abstract":"This study aims to determine which model is more effective in detecting lies between models with Mel Frequency Cepstral Coefficient (MFCC) and Short Time Fourier Transform (STFT) processes using Convolutional Neural Network (CNN). MFCC and STFT processes are based on digital voice data from video recordings that have been given lie or truth information regarding certain situations. Data is then pre-processed and trained on CNN. The results of model performance evaluation with hyper-tuning parameters and random search implementation show that using MFCC as Voice data processing provides better performance with higher accuracy than using the STFT process. The best parameters from MFCC are obtained with filter convolutional=64, kerneconvolutional1=5, filterconvolutional2=112, kernel convolutional2=3, filter convolutional3=32, kernelconvolutional3 =5, dense1=96, optimizer=RMSProp, learning rate=0.001 which achieves an accuracy of 97.13%, with an AUC value of 0.97. Using the STFT, the best parameters are obtained with filter convolutional1=96, kernel convolutional1=5, convolutional2 filters=48, convolutional2 kernels=5, convolutional3 filters=96, convolutional3 kernels=5, dense1=128, Optimizer=Adaddelta, learning rate=0.001, which achieves an accuracy of 95.39% with an AUC value of 0.95. Prosodics are used to compare the performance of MFCC and STFT. The result is that prosodic has a low accuracy of 68%. The analysis shows that using MFCC as the process of sound extraction with the CNN model produces the best performance for cases of lie detection using audio. 
It can be optimized for further research by combining CNN architectural models such as ResNet, AlexNet, and other architectures to obtain new models and improve lie detection accuracy.","PeriodicalId":513790,"journal":{"name":"JOIV : International Journal on Informatics Visualization","volume":"32 24","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140358282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
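The two audio front-ends the study compares can be sketched with NumPy alone. The block below is a minimal from-scratch illustration of how an STFT magnitude spectrogram is computed and how MFCC-style coefficients are derived from it (mel filterbank, log, DCT); it is not the authors' pipeline, and the frame size, hop, and filter counts are assumed for illustration.

```python
import numpy as np

def stft_magnitude(signal, n_fft=512, hop=256):
    """Naive STFT: Hann-windowed frames -> |FFT| magnitude spectrogram."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([
        signal[i * hop : i * hop + n_fft] * window for i in range(n_frames)
    ])
    return np.abs(np.fft.rfft(frames, axis=1))   # shape (n_frames, n_fft//2 + 1)

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels=26):
    """Triangular filters spaced evenly on the mel scale."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for j in range(left, center):
            fb[i - 1, j] = (j - left) / max(center - left, 1)
        for j in range(center, right):
            fb[i - 1, j] = (right - j) / max(right - center, 1)
    return fb

def mfcc(signal, sr, n_mfcc=13, n_fft=512, hop=256):
    """MFCC sketch: power spectrogram -> mel energies -> log -> DCT-II."""
    spec = stft_magnitude(signal, n_fft, hop) ** 2       # power spectrogram
    mel = spec @ mel_filterbank(sr, n_fft).T             # mel-band energies
    log_mel = np.log(mel + 1e-10)
    n_mels = log_mel.shape[1]
    # DCT-II basis over the mel axis picks out the cepstral coefficients.
    basis = np.cos(np.pi / n_mels * (np.arange(n_mels) + 0.5)[None, :]
                   * np.arange(n_mfcc)[:, None])
    return log_mel @ basis.T                             # (n_frames, n_mfcc)
```

Either feature matrix (the STFT spectrogram or the MFCC matrix) can then be fed to a CNN as a 2-D input; the study's comparison is between exactly these two representations.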
Sasambo batik is a traditional batik from the West Nusa Tenggara province. Sasambo itself is an abbreviation of three tribes, namely the Sasak (sa) on the Lombok Islands and the Samawa (sam) and Mbojo (bo) tribes on Sumbawa Island. Classification of batik motifs can use image processing technology, one option being the Convolution Neural Network (CNN) algorithm. Before entering the classification process, each batik image first undergoes resizing. After that, the process proceeds through the convolution, pooling, and fully connected layers. The sample images of Lombok songket motifs and Sasambo batik consist of 20 songket records with the same motif and color and 14 songket records with the same motif but different colors. In addition, there are 10 songket records with other motifs and colors, 5 Sasambo batik records with the same motif and color, and 5 Sasambo batik records with the same motif but different colors. The training data is augmented by rotating each image in 15° steps, producing as many as 20 photos. Testing with motifs of the same color shows a system success rate of 83.85%. The highest average recognition for Sasambo batik cloth, 93.66%, occurs when testing motifs with the same color against data in the database. The CNN classification results indicate that the Sasambo batik cloth can be a reference for developing songket categorization on a website platform or the Android system.
{"title":"Classification of Lombok Songket and Sasambo Batik Motifs Using the Convolution Neural Network (CNN) Algorithm","authors":"Suthami Ariessaputra, Viviana Herlita Vidiasari, Sudi Mariyanto Al Sasongko, Budi Darmawan, Sabar Nababan","doi":"10.62527/joiv.8.1.1386","DOIUrl":"https://doi.org/10.62527/joiv.8.1.1386","url":null,"abstract":"Sasambo batik is a traditional batik from the West Nusa Tenggara province. Sasambo itself is an abbreviation of three tribes, namely the Sasak (sa) in the Lombok Islands, the Samawa (sam), and the Mbojo (bo) tribes in Sumbawa Island. Classification of batik motifs can use image processing technology, one of which is the Convolution Neural Network (CNN) algorithm. Before entering the classification process, the batik image first undergoes image resizing. After that, proceed with the operation of the convolution, pooling, and fully connected layers. The sample image of Lombok songket motifs and Sasambo batik consists of 20 songket fabric data with the same motif and color and 14 songket data with the same motif but different colors. In addition, there are 10 data points on songket fabrics with other motifs and colors. In addition, there are 5 data points on Sasambo batik fabrics with the same motif and color and 5 data points on Sasambo batik fabrics with the same motif but different colors. The training data rotates the image by 150as many as 20 photos. Testing with motifs with the same color shows that the system's success rate is 83.85%. The highest average recognition for Sasambo batik cloth is in testing motifs with the same color for data in the database at 93.66%. The CNN modeling classification results indicate that the Sasambo batik cloth can be a reference for developing songket categorization using a website platform or the Android system. 
","PeriodicalId":513790,"journal":{"name":"JOIV : International Journal on Informatics Visualization","volume":"28 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140359027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
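The resize → convolution → pooling → fully connected flow described in the batik abstract can be illustrated with a tiny untrained forward pass in NumPy. The weights are random and the three motif classes are hypothetical; this is a sketch of the layer operations, not the paper's trained model.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as CNN layers compute it)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling over size x size windows."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Forward pass for one grayscale image; the 32x32 input stands in for a
# resized batik photo, and all weights are random (untrained sketch).
rng = np.random.default_rng(0)
image = rng.random((32, 32))
kernel = rng.standard_normal((3, 3))
features = max_pool(relu(conv2d(image, kernel)))     # conv -> ReLU -> pool
w_dense = rng.standard_normal((features.size, 3))    # 3 hypothetical classes
probs = softmax(features.ravel() @ w_dense)          # fully connected + softmax
```

A trained classifier would stack several such conv/pool stages and learn the kernel and dense weights from the labeled songket and Sasambo batik images; rotation augmentation (e.g. in 15° steps) would be applied to the inputs before training.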